Current Job Postings From FLI and Our Partner Organizations:
MIRI’s mission is to ensure that the creation of smarter-than-human intelligence has a positive impact. Our strategic focus is our technical research agenda on “superintelligence alignment,” composed of numerous subproblems in AI, logic, decision theory, and other fields.
Our technical research program currently employs four full-time research fellows, fosters collaboration with our research associates and others, runs several research workshops each year, and funds independently organized MIRIx workshops around the world.
We’re seeking multiple research fellows who can work with our other research fellows to solve open problems related to superintelligence alignment, and prepare those results for publication. For those with some graduate study or a Ph.D. in a relevant field, the salary starts at $65,000 to $75,000 per year, depending on experience. For more senior researchers, the salary may be substantially higher, depending on experience. All full-time employees are covered by our company health insurance plan. Visa assistance is available if needed.
Ours is a young field. Our current research agenda includes work on tiling agents, logical uncertainty, decision theory, corrigibility, and value learning, but those subtopics do not exhaust the field. Other research topics will be seriously considered if you can make the case for their tractability and their relevance to the design of self-modifying systems which stably pursue humane values.
This is not a limited-term position. The ideal candidate has a career interest in these research questions and aims to develop into a senior research fellow at MIRI, or aims to continue these avenues of research at another institution after completing substantial work at MIRI.
Some properties you should have
- Published research in computer science, logic, or mathematics.
- Enough background in the relevant subjects (computer science, logic, etc.) to understand MIRI’s technical publications.
- A proactive research attitude, and an ability to generate productive new research ideas.
A formal degree in mathematics or computer science is not required, but is recommended.
For more details, please visit here!
Centre for Effective Altruism
The Centre for Effective Altruism helps to grow and maintain the effective altruism movement. Our mission is to
- create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible; and
- make the advancement of the wellbeing of all a worldwide intellectual project, doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth.
The Centre for Effective Altruism (CEA) is looking for two research assistants (one full-time, one part-time) to help develop the intellectual foundations of effective altruism. In this role, you would work very closely with either Dr Toby Ord or Prof Peter Singer.
With Dr Toby Ord, the primary focus would be to help with the creation of a book on existential risk. This book would aim to be the go-to resource for existential risk: it will examine the arguments for mitigating existential risk, the types of existential risk, and the interventions aimed at reducing such risks. The ultimate aim is to make concern for existential risk a regular part of policy conversations, in just the same way that concern for the environment is.
With Prof Peter Singer, the primary focus would be to help with the creation of a book, co-authored with Frances Kissling, on the question of overpopulation: whether it’s a problem, what its relevance for international development is, and what actions international organisations should be taking with respect to issues of population. Other duties would include helping Prof Singer with his Project Syndicate column and his regular speaking duties. The role may also involve helping with the creation of a book on the relationship between Buddhism and utilitarian ethics.
Applications for this position must be received no later than Wednesday, June 14, 2017, at 1:00 am BST.
Please contact email@example.com if you have questions.
Future of Humanity Institute
The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.
Applications are invited for a full time Research Assistant within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 6 months from the date of appointment.
Reporting to the Director of Research, the postholder will provide research support for Senior Researchers at the Future of Humanity Institute (FHI).
The postholder’s main responsibilities will include: undertaking comprehensive and systematic literature reviews and writing up the results for publication or for presentation at conferences or public meetings; contributing to research publications, book chapters and reviews, editing researchers’ draft manuscripts and developing them into book chapters, academic papers and reviews; proofreading and answering specific research questions at the request of researchers.
Applicants will have a degree in a field related to economics, mathematics, physical sciences, computer science, philosophy, political science, or international governance, and will have sufficient specialist knowledge in existential risk to work within established research programmes. Excellent communication skills, including the ability to write publishable text and to present data at conferences, are also required.
Applicants will be required to upload a CV and supporting statement as part of their online application.
Click here to apply for this post and for further details, including the job description and selection criteria.
The FHI seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Cooperative Reinforcement Learning, Learning the Preferences of Ignorant, Inconsistent Agents, Learning the Preferences of Bounded Agents, and Safely Interruptible Agents. The internship will give you the opportunity to work on a specific project. Interns at FHI have worked on software for Inverse Reinforcement Learning, on a framework for RL with a human teacher, and on RL agents that do active learning. You will also get the opportunity to live in Oxford – one of the most beautiful and historic cities in the UK.
The ideal candidate will have a background in machine learning, computer science, statistics, mathematics, or another related field. Our internships are open to outstanding students about to enter the final year of their undergraduate or Master’s degree, or who have recently completed a PhD.
This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.
There are no deadlines for applications but it is best to contact us at least 3-4 months before the intended start of the internship, especially if you would require a visa to work in the UK.
- You should be able to undertake research at post-graduate level in the area of AI safety.
- You should be fluent in English.
- You must be available to come to Oxford for approximately 12 weeks (please indicate the period when you would be available when you apply).
To apply, please send a CV and a short statement of interest (including relevant experience in machine learning and any other programming experience) to firstname.lastname@example.org.
For more details, please visit http://www.fhi.ox.ac.uk/vacancies/
Leverhulme Centre for the Future of Intelligence
The Leverhulme Centre for the Future of Intelligence (CFI) is a new, highly interdisciplinary research centre, addressing the challenges and opportunities of future development of artificial intelligence (AI), in both the short and long term. Funded by the Leverhulme Trust for 10 years, it is based in Cambridge, with partners in Oxford, Imperial College, and UC Berkeley. The Centre will have close links with industry partners in the AI field, and with policymakers, as well as with many academic disciplines. It will also work closely with a wide international network of researchers and research institutes.
The CFI invites applications for a Postdoctoral Research Associate in an area related to the impact, ethics or nature of AI. The appointment will be for 3 years, and is based in Cambridge.
This is a new post that will permit the role-holder to pursue their own research objectives, and, at the same time, to develop new projects and programmes acting as CFI’s ‘Research Programme Coordinator’, working closely with the Centre’s Executive Director. We are seeking an ambitious candidate with initiative, excellent organisation skills and a broad intellectual range, who will play a central role in developing CFI into a world-class research centre.
We would particularly welcome candidates whose research interests lie in the fields of AI and health, AI and the future of work, AI and gender, AI and trust, or AI and the law, although we will also consider strong applications from candidates with interests in the impact of AI in other domains.
The successful candidate must have submitted their PhD by the time of appointment; if they have not yet been awarded their PhD, they will initially be appointed as a Research Assistant (Salary £25,298 – £29,301), and be promoted to Research Associate (Salary £29,301 – £38,183) on award of the PhD. In addition, experience in research project/programme development and experience in management of events, people or projects would be very welcome, as would strong interest in engagement with policy and technology communities.
Fixed-term: The funds for this post are available for 3 years in the first instance.
To apply online for this vacancy and to view further information about the role, please visit: http://www.jobs.cam.ac.uk/job/12964. This will take you to the role on the University’s Job Opportunities pages. There you will need to click on the ‘Apply online’ button and register an account with the University’s Web Recruitment System (if you have not already) and log in before completing the online application form.
Please upload in the Upload section of the online application (1) your CV; (2) a Covering Letter of no more than 1,500 words, outlining a proposed research direction, and explaining how this proposal and your skills would contribute to this project and CFI more broadly; and (3) a Sample of Writing of no more than 5,000 words that demonstrates your suitability for this project. If you upload any additional documents, we will not be able to consider these as part of your application.
The closing date for applications is 5 April 2017. If you have any questions about this vacancy, please contact Susan Gowans at email@example.com.
Please quote reference GO11490 on your application and in any correspondence about this vacancy.
The University values diversity and is committed to equality of opportunity.
The University has a responsibility to ensure that all employees are eligible to live and work in the UK.
Click here to apply!
Postdoctoral researcher at CHCAI (flexible start date)
CHCAI at Berkeley has openings for one or more postdoctoral researchers. Successful candidates will work with the CHCAI Director, Stuart Russell, or with one of the Berkeley co-Principal Investigators, Pieter Abbeel, Anca Dragan, and Tom Griffiths. There will also be opportunities to collaborate with CHCAI investigators at Cornell (Bart Selman, Joe Halpern) and Michigan (Michael Wellman, Satinder Singh), as well as with groups at Cambridge, Oxford, and Imperial College through the Leverhulme Centre for the Future of Intelligence.
Candidates need not have done previous work on the AI control problem but must have (or be about to obtain) a PhD in a relevant technical discipline (computer science, statistics, mathematics, or theoretical economics) and a record of high-quality published research. A solid understanding of current methods in AI and statistical learning would be an advantage. If you fit this description and would like to arrange a conversation with someone at the Center about what it would be like to work here, email Andrew Critch (firstname.lastname@example.org) with your CV attached.
If you choose to apply, full applications should be mailed to email@example.com and should include a CV, the names and contact details of three academic referees, and a one-page statement of interest describing in general terms the kind of research you would like to undertake.
Center for the Study of Existential Risk
The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of risks threatening human extinction that may emerge from technological advances. CSER aims to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly-developing technological power.
An existential risk is one that threatens the existence of our entire species. The Cambridge Centre for the Study of Existential Risk (CSER) — a joint initiative between philosopher Huw Price, cosmologist Martin Rees, and software entrepreneur Jaan Tallinn — was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. The Centre’s aim is to develop a new science of existential risk, and to develop protocols for the investigation and mitigation of technology-driven existential risks.
No current job postings for CSER, but you can still get involved!
The Open Philanthropy Project
The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.
Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give.
Global Catastrophic Risk Institute
The Global Catastrophic Risk Institute (GCRI) is a nonprofit think tank specializing in the topic of global catastrophic risk (GCR). GCRI works with researchers from many academic disciplines and professionals from many sectors. GCRI leads research, education, and professional networking on GCR. GCRI research aims to identify and assess the most effective ways of reducing the risk of global catastrophe, as well as the issues raised by GCR. GCRI education aims to raise awareness and understanding of global catastrophic risk among students, professionals, and, most of all, the general public.
No current job postings for GCRI, but you can still get involved!
80,000 Hours is an Oxford, UK-based organization that conducts research on careers with positive social impact and provides career advice. It provides this advice online, through one-on-one advice sessions, and through a community of like-minded individuals.
We’re not currently focused on hiring, though we’re always interested in hearing from people dedicated to our mission who’d like to join the team full time. If you’d like to express interest in joining, fill out this short form.
Unfortunately we don’t offer internships or part-time volunteer positions.