Tokyo AI & Society Symposium

Published: October 30, 2017
Author: Viktoriya Krakovna


I just spent a week in Japan to speak at the inaugural symposium on AI & Society – my first conference in Asia. It was inspiring to take part in an increasingly global conversation about AI impacts, and interesting to see how the Japanese AI community thinks about these issues. Overall, Japanese researchers seemed more open to discussing controversial topics like human-level AI and consciousness than their Western counterparts. Most people were more interested in near-term AI ethics concerns but also curious about long-term problems.

The talks were a mix of English and Japanese with translation available over audio (high quality but still hard to follow when the slides are in Japanese). Here are some tidbits from my favorite talks and sessions.

Danit Gal’s talk on China’s AI policy. She outlined China’s new policy report aiming to lead the world in AI by 2030, and discussed various advantages of collaboration over competition. It was encouraging to see that China’s AI goals include “establishing ethical norms, policies and regulations” and “forming robust AI safety and control mechanisms”. Danit called for international coordination to help ensure that everyone is following compatible concepts of safety and ethics.

Next breakthrough in AI panel (Yasuo Kuniyoshi from U Tokyo, Ryota Kanai from Araya and Marek Rosa from GoodAI). When asked about immediate research problems they wanted the field to focus on, the panelists highlighted intrinsic motivation, embodied cognition, and gradual learning. In the longer term, they encouraged researchers to focus on generalizable solutions and not to shy away from philosophical questions (like defining consciousness). I think this mindset is especially helpful for working on long-term AI safety research, and would be happy to see more of this perspective in the field.

Long-term talks and panel (Francesca Rossi from IBM, Hiroshi Nakagawa from U Tokyo and myself). I gave an overview of AI safety research problems in general and recent papers from my team. Hiroshi provocatively argued that a) AI-driven unemployment is inevitable, and b) we need to solve this problem using AI. Francesca talked about trustworthy AI systems and the value alignment problem. In the panel, we discussed whether long-term problems are a distraction from near-term problems (spoiler: no, both are important to work on), to what extent work on safety for current ML systems can carry over to more advanced systems (high-level insights are more likely to carry over than details), and other fun stuff.

Stephen Cave’s diagram of AI ethics issues. Helpfully color-coded by urgency.

Luba Elliott’s talk on AI art. Style transfer has outdone itself with a Google Maps Mona Lisa.

There were two main themes I noticed in the Western presentations. People kept pointing out that AlphaGo is not AGI because it’s not flexible enough to generalize to hexagonal grids and such (this was before AlphaGo Zero came out). Also, the trolley problem was repeatedly brought up as a default ethical question for AI (it would be good to diversify this discussion with some less overused examples).

The conference was very well-organized and a lot of fun. Thanks to the organizers for bringing it together, and to all the great people I got to meet!

This content was first published at futureoflife.org on October 30, 2017.
