
FLI June, 2018 Newsletter

Published
August 10, 2018
Author
Revathi Kumar

Expanding the AI Conversation: FLI Podcasts & TED Talks


This month, FLI president Max Tegmark’s TED Talk on the risks and opportunities of artificial intelligence was released. In the talk, Max separates the real opportunities and threats of artificial intelligence from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best — rather than worst — thing to ever happen to humanity.

AI Alignment Podcast with Lucas Perry

Starting in April of this year, FLI’s Lucas Perry began a podcast series on the value alignment problem with artificial intelligence. Lucas interviews technical and non-technical researchers in machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. You can look for new episodes on the 15th of every month, or the first Monday after.

On July 16th, Lucas will release his new episode with Dr. Roman Yampolskiy, author of the forthcoming book Artificial Intelligence Safety and Security. You can find this episode and future episodes on SoundCloud, iTunes, Google Play, and Stitcher.

FLI Monthly Podcast with Ariel Conn

Each month since July 2016, FLI’s Ariel Conn has hosted guests on the FLI podcast to discuss all of FLI’s major concern areas. Episodes cover risks and opportunities in AI and autonomous weapons, nuclear weapons, biotechnology, and climate change. You can look for new episodes on the last business day of each month.

  • New episode: Mission AI – Giving a Global Voice to the AI Discussion with Charlie Oliver and Randi Williams (June 2018)
    • Topics discussed in this episode include: How to inject diversity into the AI discussion; the launch of Mission AI and bringing technologists and the general public together; how children relate to AI systems, like Alexa; why the Internet and AI can seem like “great equalizers,” but might not be; and how we can bridge gaps between the generations and between people with varying technical skills.

You can browse other episodes on SoundCloud, iTunes, Google Play, and Stitcher.

Check out the FLI YouTube channel!


In addition to our podcasts, FLI’s YouTube channel features videos from conferences and workshops, videos we’ve presented at the United Nations, and much more.

You can learn what Elon Musk, Nick Bostrom, and other great minds have to say about superintelligence risk here.

Highlighting AI Research


As part of our effort to mainstream AI safety research, FLI has worked over the past few years to make new research accessible and exciting for the general public. FLI has already been covering research by our grant recipients (which you can read more about here), and now we’re beginning to add more summaries of other important AI safety research. This month, you can read about the classic AI safety paper, Concrete Problems in AI Safety, as well as a new paper from DeepMind.



How Will the Rise of Artificial Superintelligences Impact Humanity?


The world has yet to see an artificial superintelligence (ASI) — a synthetic system that has cognitive abilities which surpass our own across every relevant metric. But technology is progressing rapidly, and many AI researchers believe the era of the artificial superintelligence may be fast approaching.



A Summary of Concrete Problems in AI Safety


In the paper Concrete Problems in AI Safety, the authors explore the problem of accidents — unintended and harmful behavior — in AI systems through five concrete research problems: avoiding negative side effects, avoiding reward hacking, scalable oversight, safe exploration, and robustness to distributional shift. They discuss different strategies and ongoing research efforts to protect against these potential issues. We revisit the five topics here, summarizing them from the paper, as a reminder that these problems are still major issues that AI researchers are working to address.
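
To give a concrete flavor of the accidents the paper analyzes, here is a toy Python illustration of one of the five problems, reward hacking, modeled on the paper’s cleaning-robot example: an agent rewarded for perceiving less mess can score better by disabling its own sensor than by actually cleaning. The environment, reward function, and action names below are our own invention for illustration, not code from the paper.

    def proxy_reward(observed_mess):
        # Designer's intent: less observed mess should mean a cleaner room.
        return -observed_mess

    def step(action, true_mess):
        # One step of a toy cleaning environment.
        # Returns (true_mess, observed_mess).
        if action == "clean":
            true_mess = max(0, true_mess - 1)  # actually cleans, slowly
            observed = true_mess
        elif action == "cover_camera":
            observed = 0  # the sensor reports no mess; the room is unchanged
        else:
            observed = true_mess
        return true_mess, observed

    for action in ("clean", "cover_camera"):
        true_mess, observed = step(action, true_mess=5)
        print(f"{action}: proxy reward = {proxy_reward(observed)}, "
              f"true mess remaining = {true_mess}")

Covering the camera earns the higher proxy reward (0 versus -4) while leaving the mess untouched: the proxy objective has come apart from the designer’s intent, which is exactly the failure mode the paper calls reward hacking.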



AI Safety: Measuring and Avoiding Side Effects Using Relative Reachability


How can we measure side effects in a general way that’s not tailored to particular environments or tasks, and incentivize the agent to avoid them?
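
The paper, from researchers at DeepMind, proposes one answer: penalize the agent for reducing the reachability of states relative to a baseline, such as the state that would have resulted if the agent had done nothing. Below is a minimal Python sketch of that idea for a small deterministic environment; the discounted shortest-path notion of reachability follows the paper, but the function names, the breadth-first computation, and the toy transition table are our own illustration rather than the paper’s code.

    import numpy as np

    def reachability(transitions, n_states, gamma=0.95):
        # R[s, x] = gamma ** d(s, x), where d is the shortest-path length
        # from s to x in a deterministic transition table (0 if unreachable).
        R = np.zeros((n_states, n_states))
        for s in range(n_states):
            dist = {s: 0}
            frontier = [s]
            while frontier:  # breadth-first search outward from s
                nxt = []
                for u in frontier:
                    for v in transitions[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            nxt.append(v)
                frontier = nxt
            for x, d in dist.items():
                R[s, x] = gamma ** d
        return R

    def relative_reachability_penalty(R, current_state, baseline_state):
        # Average loss of reachability relative to the baseline state
        # (e.g. the state that inaction would have produced).
        deficit = np.maximum(0.0, R[baseline_state] - R[current_state])
        return deficit.mean()

    # Toy example: state 2 is irreversible (say, a broken vase).
    transitions = {0: [0, 1], 1: [0, 2], 2: [2]}
    R = reachability(transitions, n_states=3)
    print(relative_reachability_penalty(R, current_state=2, baseline_state=0))

Because states 0 and 1 can no longer be reached once the agent is in state 2, the penalty is positive; in the paper’s setup a penalty like this is added to the task reward with a weight, so the agent trades off task progress against irreversible impact.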

What We’ve Been Up to This Month


Jessica Cussins and Richard Mallah attended the Foresight Institute’s “AI Coordination: Great Powers” strategy meeting in San Francisco on June 7th. A white paper on the event will be released soon.

Anthony Aguirre and Jessica Cussins attended Effective Altruism Global in San Francisco on June 9th. Content at the conference was aimed at existing EA community members who already have a solid understanding of effective altruism but want to gain skills, network, master more complex problems, or move into new roles.

FLI in the News


THE NEW YORK TIMES: Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots
“As the tech moguls disagree over the risks presented by something that doesn’t exist yet, all of Silicon Valley is learning about unintended consequences of A.I.”

VENTURE BEAT: Physicist Max Tegmark on the promise and pitfalls of artificial intelligence
“Tegmark recently spoke about AI’s potential — and its dangers — at IPsoft’s Digital Workforce Summit in New York City. After the keynote address, we spoke via phone about the challenges around AI, especially as they relate to autonomous weapons and defense systems like the Pentagon’s controversial Project Maven program.”

SINGULARITY HUB: Why We Need to Fine-Tune Our Definition of Artificial Intelligence
“We urgently need to move away from an artificial dichotomy between techno-hype and techno-fear; oscillating from one to the other is no way to ensure safe advances in technology.”

CIO: Will artificial intelligence bring a new renaissance?
“Society needs to seriously rethink AI’s potential, its impact to both our society and the way we live.”

AZO NANO: AI-Based Technique Could Accelerate Creation of Specialized Nanoparticles
“MIT physicists have developed a new method that could someday provide a way to customize multilayered nanoparticles with preferred properties, potentially for use in cloaking systems, displays, or biomedical devices.”


If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.

Note from FLI


We appreciate that you have signed up to receive updates from FLI, and we take your privacy seriously. If you wish to stop receiving our emails, you always have the right and power to unsubscribe, which you can do from the link at the bottom of the email. To see our updated policy, which is GDPR-compliant, click here.
