The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety is designed to support promising postdoctoral researchers who plan to work on AI existential safety research. Funding is for three years, subject to annual renewals based on satisfactory progress reports. For host institutions in the US, UK, or Canada, the Fellowship includes an annual $80,000 stipend and a fund of up to $10,000 for research-related expenses such as travel and computing. At universities outside the US, UK, or Canada, the fellowship amount will be adjusted to match local conditions. Fellows will be invited to workshops where they will be able to interact with other researchers in the field.
Questions about the fellowship or application process that are not answered on this page should be directed to email@example.com.
Purpose and eligibility:
The purpose of the fellowship is to fund talented postdoctoral researchers to work on AI existential safety research. To be eligible, applicants should identify a mentor (normally a professor) at the host institution (normally a university) who commits in writing to mentor and support the applicant in their AI existential safety research if a Fellowship is awarded. This includes ensuring that the applicant has access to office space and is welcomed and integrated into the local research community. Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field.
Applicants will submit a detailed CV, a research statement, a summary of previous and current research, the names and email addresses of three referees, and the proposed host institution and mentor (whose agreement must have been secured beforehand).
The research statement should include the applicant's reason for interest in AI existential safety, a technical specification of the proposed research, and a discussion of why it would reduce the existential risk of advanced AI technologies or otherwise meet our eligibility criteria.
The proposed mentor will be asked to submit a letter confirming that they will supervise the applicant to work on AI existential safety research as per above, and that the applicant will be employed by the host institution if the Fellowship is offered.
There are no geographic limitations on applicants or host universities. We welcome applicants from a diverse range of backgrounds, and we particularly encourage applications from women and underrepresented minorities.
Timing for 2022:
The deadline for applications is January 2, 2022 at 11:59 pm ET. After an initial round of deliberation, short-listed applicants will be interviewed before fellows are finalized. Offers will be made no later than the end of March 2022.
AI Existential Safety Research Definition