At a press conference at the IJCAI AI conference in Buenos Aires today, Stuart Russell and Toby Walsh announced an open letter on autonomous weapons that we’ve helped organize. We’re delighted that it’s been signed by over 1700 AI/robotics researchers and over 11000 others. Signatories include 14 current and past presidents of AI/robotics organizations (AAAI, IEEE-RAS, […]
About Max Tegmark
Known as “Mad Max” for his unorthodox ideas and passion for adventure, his scientific interests range from precision cosmology to the ultimate nature of reality, all explored in his popular book “Our Mathematical Universe”. He is an MIT physics professor with more than two hundred technical papers and has been featured in dozens of science documentaries. His work with the SDSS collaboration on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.” He is founder (with Anthony Aguirre) of the Foundational Questions Institute.
Entries by Max Tegmark
I just had the pleasure of discussing our new AI safety research program on National Public Radio. I was fortunate to be joined by two of the winners of our grants competition: CMU roboticist Manuela Veloso and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.
After a grueling expert review of almost 300 grant proposals from around the world, we are delighted to announce the 37 research teams that have been recommended for funding to help keep AI beneficial. We plan to award these teams a total of about $7M from Elon Musk and the Open Philanthropy Project over the […]
Inspired by our Puerto Rico AI conference and open letter, a team of economists and business leaders have now launched their own open letter specifically on how to make AI’s impact on the economy beneficial rather than detrimental. It includes lists of specific policy suggestions.
CBS News interviewed me for this morning’s segment on the future of AI, which avoided the tired old “robots-will-turn-evil” message and reported on the latest DARPA challenge.
Nature just published four interesting perspectives on AI ethics, including an article and podcast on lethal autonomous weapons by Stuart Russell.
Stephen Hawking, who serves on our FLI Scientific Advisory Board, just gave an inspiring and thought-provoking talk that I think of as “A Brief History of Intelligence”. He spoke of the opportunities and challenges related to future artificial intelligence at a Google conference outside London, and you can watch it here.
Bill Gates and Elon Musk recently discussed the future of AI, and Bill said he shared Elon’s safety concerns. Regarding people dismissing AI concerns, he said “How can they not see what a huge challenge this is?”. We’re honored that he also referred to our new FLI research program on beneficial AI as “absolutely fantastic”. […]
We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the response was overwhelming: about 300 applications for a total of about $100M, including a great diversity of awesome teams and projects from […]
Steve Wozniak, without whom I wouldn’t be typing this on a Mac, has now joined the growing group of tech pioneers (most recently his erstwhile arch-rival Bill Gates) who feel that we shouldn’t dismiss concerns about future AI developments. Interestingly, he says that he had long dismissed the idea that machine intelligence might outstrip human […]
The Future of Technology: Benefits and Risks FLI was officially launched on Saturday, May 24, 2014 at 7pm in MIT auditorium 10-250 – see the video, transcript, and photos below. The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about […]