
FLI July, 2018 Newsletter

Published
August 4, 2018
Author
Revathi Kumar


AI Companies and Researchers Pledge Not to Develop Lethal Autonomous Weapons


After years of voicing concerns, AI leaders have, for the first time, taken concrete action against lethal autonomous weapons, signing a pledge to neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

The pledge has been signed to date by over 220 AI-related companies and organizations from 36 countries, and 2,800 individuals from 90 countries. Signatories of the pledge include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Demis Hassabis, British MP Alex Sobel, Elon Musk, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

FLI president Max Tegmark announced the pledge in a talk at the annual International Joint Conference on Artificial Intelligence (IJCAI) this month. The pledge has been covered by CNN, The Guardian, NPR, Forbes, The Washington Post, and Newsweek.

If you work with artificial intelligence in any way, and if you believe that the final decision to take a life should remain a human responsibility rather than falling to a machine, then please consider signing this pledge, either as an individual or on behalf of your organization.

Podcast: Six Experts Explain the Killer Robots Debate
with Paul Scharre, Toby Walsh, Richard Moyes, Mary Wareham, Bonnie Docherty, and Peter Asaro


Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated. 

In this month’s podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre, artificial intelligence professor Toby Walsh, Article 36 founder Richard Moyes, Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty, and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro.

Topics discussed in this episode include:

  • the history of semi-autonomous weaponry in World War II and the Cold War (including the Tomahawk Anti-Ship Missile)
  • how major military powers like China, Russia, and the US are incorporating AI into their weapons today
  • why it’s so difficult to define LAWS and draw a line in the sand
  • the relationship between LAWS proliferation and war crimes
  • comparing LAWS to blinding lasers and chemical weapons
  • why there is hope for the UN to address this issue

You can listen to the podcast here, and check us out on SoundCloud, iTunes, Google Play, and Stitcher.

FLI Announces 2018 AGI Grant Winners


Following its first AI safety grant round in 2015, which funded 37 research teams, FLI has launched a second AI safety grant round. This time, FLI has allocated $2 million, donated by Elon Musk, to 10 grant winners to fund research that anticipates artificial general intelligence (AGI) and explores how it can be designed beneficially.

Grant topics include: training multiple AIs to work together and learn from humans about how to coexist, training AI to understand individual human preferences, understanding what “general” actually means, incentivizing research groups to avoid a potentially dangerous AI race, and many more. You can read more about the grant winners here.

As Max Tegmark said, “I’m optimistic that we can create an inspiring high-tech future with AI as long as we win the race between the growing power of AI and the wisdom with which we manage it. This research is to help develop that wisdom and increase the likelihood that AGI will be the best rather than the worst thing to happen to humanity.”

AI Alignment Podcast with Lucas Perry

AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy


What role does cyber security play in AI alignment and safety? What is AI completeness? What is the space of mind design and what does it tell us about AI safety? How does the possibility of machine qualia fit into this space? Can we leakproof the singularity to ensure we are able to test AGI? And what is computational complexity theory anyway?

In this podcast, Lucas spoke with Roman Yampolskiy, a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. Dr. Yampolskiy’s main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. He is the author of over 100 publications, including multiple journal articles and books.

Topics discussed in this episode include:

  • cyber security applications to AI safety
  • the control problem
  • the ethics of and detecting qualia in machine intelligence
  • machine ethics and its role, or lack thereof, in AI safety
  • simulated worlds and if detecting base reality is possible
  • AI safety publicity strategy
To listen to the podcast, click here, or find us on SoundCloud, iTunes, Google Play, and Stitcher.

AI Safety Research & Global AI Policy



AI Policy Pages


In order to realize AI’s enormous potential, the challenges associated with its development have to be addressed. This page highlights three complementary resources to help decision makers navigate AI policy: a global landscape of national and international AI strategies; a list of prominent AI policy challenges and key recommendations that have been made to address them; and a list of AI policy resources for those hoping to learn more.



Machine Reasoning and the Rise of Artificial General Intelligences: An Interview With Bart Selman


If we already have machines that are proficient at learning through pattern recognition, how long will it be until we have machines that are capable of true reasoning, and how will AI evolve once it reaches this point?

What We’ve Been Up to This Month


Max Tegmark gave an invited talk titled “Beneficial Intelligence & Intelligible Intelligence” at the IJCAI/ECAI AI conference in Stockholm, Sweden on July 18. You can watch the talk here.

In addition, Max’s book on the opportunities and challenges of AI is now available in 11 languages and has just come out in paperback, peaking at #4 on the UK Amazon ranking.

FLI in the News

“The move is the latest from concerned scientists and organisations to highlight the dangers of handing over life and death decisions to AI-enhanced machines. It follows calls for a preemptive ban on technology that campaigners believe could usher in a new generation of weapons of mass destruction.”

CNN: Leading AI researchers vow to not develop autonomous weapons
“‘We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody,’ said Anthony Aguirre, who teaches physics at the University of California-Santa Cruz and signed the letter.”

NPR: AI Innovators Take Pledge Against Autonomous Killer Weapons
“During the annual International Joint Conference on Artificial Intelligence in Stockholm on Wednesday, some of the world’s top scientific minds came together to sign a pledge that calls for ‘laws against lethal autonomous weapons.'”

THE WASHINGTON POST: Tech leaders: Killer robots would be ‘dangerously destabilizing’ force in the world
“Among them are billionaire inventor and OpenAI founder Elon Musk, Skype co-founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, as well as the three founders of Google DeepMind — the company’s premier machine learning research group.”

FORBES: AI in Startups
“…narrow AI will disrupt a significant portion of the work force within this generation, and empowering entrepreneurs may be a worthy goal for mitigating this inevitability, as well as a productive strategy for regional and national economic growth.”

JAXENTER: How to develop machine learning responsibly
“Machine learning inevitably adds black boxes to automated systems and there is clearly an ethical debate about the acceptability of appropriating ML for a number of uses. The risks can be mitigated with five straightforward principles.”


If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.
