
Full Transcript: Understanding Artificial General Intelligence — An Interview With Dr. Hiroshi Yamakawa

Published: October 12, 2017
Author: a guest blogger


This page is also available in other languages: Japanese, Russian.

Hiroshi Yamakawa is the Director of the Dwango AI Laboratory, Chairperson of the Whole Brain Architecture Initiative (WBAI), a specified non-profit organization, and Chief Editor of the Japanese Society for Artificial Intelligence (JSAI). He is also a Visiting Professor at the Graduate School of Information Systems at the University of Electro-Communications, a Fellow Researcher at the Brain Science Institute at Tamagawa University, a Senior Researcher at the Keio Research Institute at SFC, a Visiting Researcher at the University of Tokyo, and a Visiting Researcher at the Artificial Intelligence Research Center of AIST. He specializes in AI, in particular cognitive architecture, concept acquisition, neurocomputing, bioinformatics, and proxy voting technology. He is one of the founders of the Whole Brain Architecture Seminar and the SIG-AGI (artificial general intelligence) group in Japan, and is one of the leading researchers working on AGI in Japan.

We sat down with Dr. Yamakawa to ask him about his career, where AGI is headed, how to handle AI safety, and how the Japanese AI community is thinking about these issues. 

FLI: What was the reason that you decided to pursue a career in AI research?

HY: When I was a high school student, I had an interest in both physics and psychology. After much consideration, I eventually chose physics in college. As a fourth-year college student in the 1980s — the time of the so-called second AI boom — I had just begun to hear about AI from my colleagues. In the symbolic AI of those days, the world was described by symbols and people tried to build intelligence by manipulating those symbols.  However, I could not accept symbolic AI, so I chose physics for my master’s studies.

Then, around the end of the ‘80s, the neural network boom came about. Hinton’s group, which is still well-known, was just becoming active in the field at the time. My intuition told me that this new field of AI would be a better field of research, and in 1989 I shifted my focus to neural networks for the remaining three years of my PhD studies. I was interested in how people formed their own sense of values, and because of that, I started research in reinforcement learning around 1990. I then wrote my thesis entitled “Intelligent System Based on Reinforcement Learning.” This was actually the same time that Chris Watkins had introduced Q-learning, but since information was not immediately circulated on the internet at the time, it took about two years for me to find out. I then thought to myself, “Ah, there are people out there thinking the same thing.”

In 1992, I continued my research at Fujitsu Laboratories. By 1994, I had started work on autoencoder neural network research, but only managed to come up with a rudimentary model. It was afterwards in the latter half of the 1990s when I realized the importance of creative intelligence, the term which has now essentially evolved into recursive self-improvement or recursive AI. At that point, the term “singularity” was not yet widespread, and it took me until the late 2000s to realize that “someone else has already thought of this quite some time ago.”

In the end, my interest in psychology in high school and realizing the possibilities of neural networks during the boom prompted me to start working in AI research.

FLI: Why did the Dwango Artificial Intelligence Laboratory decide to make a large investment in this area? When do they expect a return on investment?

HY: Do you know the Japanese game of Shogi (Japanese chess)? It is a game that resembles chess, but pieces taken from your opponent can be used as your own, thereby increasing complexity as the game unfolds. In 2014, AI finally caught up to the top Shogi player.

Since 2010, Dwango has been hosting an event called Den'ousen (lit. "Electronic King War," a play on the word dennou, meaning "cyberbrain"), where professional Shogi players and AI programs play against each other. The organizer of this event is Mr. Kawakami, who was then the Chairman of Dwango and in his mid-forties. He was able to see that it was gradually becoming more difficult for professional Shogi players to beat the computer. Seeing this tendency, and influenced by his management experience, he started to think about opening an AI lab.

Yutaka Matsuo at the University of Tokyo and Yuuji Ichisugi at the National Institute of Advanced Industrial Science and Technology (AIST) started activities around "Whole Brain Architecture" in order to achieve the kind of AI we had in mind. We held the first open seminar related to this topic on December 19, 2013, and the group now has over 4,300 participants on Facebook. In 2014, Mr. Kawakami participated in this whole brain architecture open seminar, and I received an offer regarding the Dwango Artificial Intelligence Laboratory. When we established the Dwango Artificial Intelligence Laboratory in October 2014, we anticipated that AI would progress, but not as explosively as it has.

Considering that background, the Dwango Artificial Intelligence Laboratory, rather than looking to make a profit, started with the goal of making a long-term investment. Up to now, we have been working on the development of artificial general intelligence (AGI) in small teams with the thought that it will have a significant impact on the future.

Usable AI developed up to now has essentially been built to address specific areas or particular problems. Rather than just solving a set of predefined problems, AGI, we believe, will be more similar to human intelligence, able to solve various problems that were not assumed in the design phase. We can also think of AGI as "a technological target for AI that exceeds human intelligence."

If we look at the world, the number of organizations announcing that they are striving to develop AGI doubled in 2015. Some of the primary organizations include GoodAI, DeepMind, and OpenAI. Why would so many organizations now want to tackle the challenge of developing AGI? In the end, it seems that the emergence of deep learning is the reason.

The field of AI has traditionally progressed with symbolic logic at its center. AI has been built from knowledge defined by developers and has manifested as systems with particular abilities. This resembles "adult" intelligence. From this, programming logic becomes possible, and technologies like calculators have steadily advanced. On the other hand, the way a child learns to recognize objects or move things during early development, which corresponds to "child" AI, is conversely very difficult to explain. Because of this, programming such child-like behaviors is very difficult, which has stalled progress. This is also called Moravec's Paradox.

However, with the advent of deep learning, development of this kind of "child" AI has become possible by learning from large amounts of training data. Understanding what deep learning networks have actually learned remains an important technological hurdle today, and this inability to explain exactly how "child" AI works is key to understanding why we had to wait for the appearance of deep learning.

By the way, the Dwango Artificial Intelligence Laboratory has recently helped with the General AI Challenge (https://www.general-ai-challenge.org/), which is promoted by GoodAI (https://www.goodai.com/).

For a while now, AI has often demonstrated greater-than-human ability in "adult" intelligence. With "child" intelligence now achieved through the recent success of deep learning, the two basic elements that make up artificial intelligence have become available. Consequently, one major problem in achieving a level of intelligence comparable to that of humans is how to connect these two types of AI. Being at this stage of development is likely one of the reasons that the field has started gravitating toward development of general or human-level AI.

The role of the Dwango Artificial Intelligence Laboratory of Dwango Ltd. is to act as a flagship to attract talented individuals who are genuinely interested in AI and to promote collaboration with the academic and scientific communities. Dwango itself is an internet company with several properties, and its main business is a video-sharing platform called Nico Nico Douga, where users can comment on videos. When we are involved in industry, we use machine learning to develop tools and services that can improve the lives of people in society and our communities.

At this stage, AGI is a technical goal and is not directly producing economic benefits. However, as I mentioned before, more organizations have recently begun to develop this type of AI, so we can assume that these types of companies will naturally increase. This is a big difference from 2013, when few companies were seriously paying attention to AGI.

We have already been able to develop intelligent agents that employ reinforcement or deep learning as an approach to solving a number of problems, and these are quickly approaching the level of productization. What is now technically important is the architecture by which we merge or assemble these approaches. With the development of machine learning, including deep learning, if you can pick the right framework for the information you're dealing with, it's quite feasible to extract information from the relevant data. As a next step, the architecture for combining and using knowledge acquired from data by multiple machine learning modules is becoming more important.
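
As a concrete illustration of the kind of modular architecture described here, below is a minimal, hypothetical Python sketch (our own illustration, not the BriCA API or any WBAI code): independent learning modules pass the information they extract to one another through explicit connections, while a simple scheduler advances them in lockstep.

```python
# Hypothetical sketch of connecting multiple machine-learning modules.
# Names and structure are illustrative only, not an actual WBAI/BriCA interface.

from typing import Dict, List, Tuple


class Module:
    """One learning component (e.g. a perception or decision module)."""

    def __init__(self, name: str):
        self.name = name
        self.inputs: Dict[str, list] = {}   # data received from other modules
        self.outputs: Dict[str, list] = {}  # data to pass on to other modules

    def step(self) -> None:
        """Run one update; a real module would train or infer here."""
        # Placeholder behaviour: forward whatever arrived on the 'in' port.
        self.outputs["out"] = self.inputs.get("in", [])


class Architecture:
    """Wires modules together and advances them in lockstep."""

    def __init__(self):
        self.modules: List[Module] = []
        self.connections: List[Tuple[Module, Module]] = []

    def add(self, module: Module) -> Module:
        self.modules.append(module)
        return module

    def connect(self, src: Module, dst: Module) -> None:
        self.connections.append((src, dst))

    def tick(self) -> None:
        # Update every module, then deliver outputs along the connections.
        for m in self.modules:
            m.step()
        for src, dst in self.connections:
            dst.inputs["in"] = src.outputs.get("out", [])


# Usage: a perception module feeding a decision module.
arch = Architecture()
perception = arch.add(Module("perception"))
decision = arch.add(Module("decision"))
arch.connect(perception, decision)

perception.inputs["in"] = ["observed object"]
for _ in range(2):  # two ticks so data propagates through both modules
    arch.tick()
print(decision.outputs["out"])  # -> ['observed object']
```

The point of the sketch is only the wiring: each box can internally be any learner (a convolutional network, a reinforcement-learning agent, and so on), and the architecture-level question is how their acquired knowledge is routed and combined.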

From now on, if we focus on research and development with this architecture in mind, we will be able to reach the next step in creating a system with general intelligence, and with it the next generation of practical uses will become evident.

FLI: What is the advantage of the Whole Brain Architecture approach, and how does it differ from other avenues of AI research?

HY: Whole brain architecture is an engineering-based research approach "to create a human-like artificial general intelligence (AGI) by learning from the architecture of the entire brain." This definition was finalized through discussions between 2014 and 2015 among researchers involved in founding the Whole Brain Architecture Initiative (WBAI, http://wba-initiative.org/en/), described later. In short, the goal is brain-inspired AI, which is essentially AGI. Basically, this approach to building AGI is the integration of artificial neural networks and machine-learning modules while using the brain's hard wiring as a reference. However, even though we are using the entire brain as a building reference, our goal is not to completely understand the intricacies of the brain. In this sense, we are not looking to perfectly emulate the structure of the brain, but to continue development with it as a coarse reference.

Let’s take a moment to understand the merits of achieving AGI through this approach and the merits of quick development.

We can divide the merits of this unique approach into two categories. The first is that since we are creating AI that resembles the human brain, we can develop AGI with an affinity for humans. Simply put, I think it will be easier to create an AI with the same behavior and sense of values as humans this way. Even if superintelligence exceeds human intelligence in the near future, it will be comparatively easy to communicate with AI designed to think like a human, and this will be useful as machines and humans continue to live and interact with each other. As an example, artificial intelligence with childcare as a task needs to understand the feelings of the child. It seems that this is equivalent to the value alignment discussed by FLI and others.

The second merit of this unique approach is that if we manage the development of this whole brain architecture successfully, the completed AGI will arise as an entity to be shared by all of humanity. In short, in conjunction with the development of neuroscience, we will increasingly be able to see the entire structure of the brain and build a corresponding software platform. Developers will then be able to collaboratively contribute to this platform. When looking at the brains of humans and rodents, it's becoming apparent that the mesoscopic-level connectome can be associated with design diagrams representing connections between hundreds of machine learning modules. Moreover, with collaborative development, it will likely be difficult for the result to become "someone's" private thing or project. In order to create an environment in which people can openly help create something for the benefit of society, we need an organization like WBAI, which I will talk about later.

As a supplement to the unique approach described above: since the brain is the one example we have of a generally intelligent entity, AGI built this way can gradually improve until it reaches the level of the human brain. As such, this approach to construction not only draws on the general knowledge of many people but should also help increase participation. If we look at the background of this development, there have been many attempts at creating architectures aimed toward AGI, but combining the efforts of numerous researchers has become increasingly difficult. Given the lack of a well-developed basis for collaboration, organizing development around a brain-based architecture will likely help us obtain agreement and understanding among researchers.

The AGI that can be achieved with this Whole Brain Architecture approach could potentially be the first of its kind in the eyes of humanity. Next, I would like to mention the merit of development efficiency and will talk about four related merits.

The first stems from the same merit of our unique approach, which is the notion that efficiency can be achieved through the collaborative efforts of many.

Secondly, the difficulty in creating a generally intelligent system arises from having to establish goals or objectives for that system. In general software planning, a specific functional goal is typically set, and a series of individual components are put together in a package to solve the problem. On the other hand, general intelligence is a function of many combined, interconnected features produced by learning, so we cannot manually break down these features into individual parts. Because of this difficulty, one meaningful characteristic of whole brain architecture is that though based on brain architecture, it is designed to be a functional assembly of parts that can still be broken down and used.

The third merit is deeply related to the one mentioned above. By the time AGI is near completion, a whole brain architecture may be necessary anyway. But even before completing the individual parts, we can design an overall image of the framework by using the brain as a reference. For example, when motor vehicles first appeared, Benz's Patent-Motorwagen of 1886 was a three-wheeled vehicle built on the carriage and bicycle technology of its time. Much like this, the plan to use our architecture as a framework in which to combine machine-learning techniques can be thought of as a way to speed up the process toward a finished product.

The fourth merit of development efficiency is related to the fact that functional counterparts of parts of the brain are to some degree already present in artificial neural networks. It follows that we can build a roadmap to AGI based on these technologies as pieces and parts. It is now said that convolutional neural networks essentially match or exceed the visual pathway running from the visual cortex to the temporal lobe in image recognition tasks.

At the same time, deep learning has been used to achieve very accurate voice recognition. In humans, the neocortex contains about 14 billion neurons, but roughly half of its functions can be partially explained with deep learning. Also, in the brain, the part responsible for the calculations of delayed feedback required for reinforcement learning is tied to the basal ganglia. Furthermore, progress is being made in modeling the cerebellum as a perceptron. From this point on, we need to come closer to simulating the functions of the different structures of the brain, and even before having the whole brain architecture (WBA), we need to be able to assemble several structures together to reproduce some behavioral-level functions. Then, I believe, we'll have a path to expand that development process to cover the rest of the brain's functions and finally integrate them as a whole brain. For argument's sake, let's say we have already completed one-third of this process. If we can solve one problem after another, you can easily imagine that we will eventually reach 100%.

Currently, organizations thought to be taking an approach similar to WBA include the UK's DeepMind and America's Sandia National Laboratories. Considering the recent increase in the number of organizations targeting machine learning and the support the WBA approach finds in neuroscience, we can expect a continued increase in these kinds of research organizations in the near future.

FLI: In addition to your work at the Dwango Artificial Intelligence Laboratory, you also started a non-profit, the Whole Brain Architecture Initiative. How does the non-profit’s role differ from the commercial work?

HY: The Whole Brain Architecture Initiative serves as an organization that helps promote whole brain AI architecture R&D as a whole. In this respect, the Dwango Artificial Intelligence Laboratory functions not only as a supporter of WBAI but also as a developer of WBA.

First off, I would like to talk a little bit about why we are focusing only on promoting the development of the whole brain architecture (WBA). As I mentioned before, if we can manage the development of WBA successfully, I think we can ensure that AGI will be something that benefits the general public. On the other hand, if WBA were realized and used privately, we might deviate from this goal. With this in mind, we want to work toward the ideal of including others in the development of WBA. To support this, we have set up our own vision, mission, and values [see below].

The Basic Ideas of the WBAI

  • Our vision is to create a world in which AI exists in harmony with humanity.
  • Our mission is to promote the open development of whole brain architecture.
    • In order to make human-friendly artificial general intelligence a public good for all of mankind, we seek to continually expand open, collaborative efforts to develop AI based on an architecture modeled after the brain.
  • Our values are Study, Imagine and Build.
    • Study: Deepen and spread our expertise.
    • Imagine: Broaden our views through public dialogue.
    • Build: Create AGI through open collaboration.

To advance this research, we are creating development environments such as Life in Silico (LIS), which has already been used for AI learning and testing; platforms like BriCA, which can facilitate the integration of machine learning approaches; criteria for the development of AGI and research methods to evaluate it; and efforts like the Whole Brain Connectomic Architecture (a cognitive architecture referring to the connectome), which is grounded in neuroscience. Also, in order to form and expand an open engineering community around this development environment, we have built a community called Sig-WBA on Slack, and we conduct face-to-face meetings every other week as well as occasional mini-hackathons.

We also hold the Whole Brain Architecture Seminar every other month. At this meeting, in order to bring together different areas of expertise such as machine learning, neuroscience, and cognitive architecture, we continue to invite researchers from other fields to give talks or take part in panel discussions.

The Dwango Artificial Intelligence Laboratory, on the other hand, is a kind of "creating AI together" organization that is actually developing technology leading to WBA and AGI. In particular, we are conducting open research and development on the WBAI platform in an effort to be that kind of organization.

Intuitive physics research is one activity of the Dwango Artificial Intelligence Laboratory. This is a study aimed at reproducing the process of developing a naive understanding of the physical world, the way a child does from the moment he or she is born. If these abilities are not created in the same way that a child acquires knowledge, AI will not grow in the same way as a human. This line of research was started by Josh Tenenbaum's group at MIT. From that point of view, our WBA approach is trying to develop AGI in the same way.

Additionally, one domestic conference related to AGI is the Special Interest Group for AGI of the Japanese Artificial Intelligence Society (http://www.sig-agi.org/sig-agi), and we are working to publish preliminary research at similar venues.

FLI: We noticed the Dwango Artificial Intelligence Laboratory website talks about existential risk, using Easter Island as an analogy. This is also an important issue for FLI, and we focus on a few different areas (AI, Biotech, Nuclear Weapons, and Climate Change). What do you think poses the greatest existential risk to global society in the 21st century?

HY: To my knowledge, this notion of "existential risk" you speak of signifies some manner of crisis that threatens the survival of the human race. If you translate "existential risk" directly into Japanese (as "survival risk"), it may be difficult for Japanese people to understand your meaning, as these words carry different nuances for them. The risk is not limited to AI; basically, as human scientific and technological abilities expand and we become more empowered, risks will increase too.

That being said, not only AI but science and technology more broadly will continue to expand, empowering mankind and extending human capabilities in many ways. In the past, we had only weak technologies or, more precisely, technologies with weak offensive power. In other words, imagine a large field where everyone has weapons only as dangerous as bamboo spears. The risk that human beings would go extinct by killing each other is extremely small. On the other hand, as technologies develop, it is as if we are all holding bombs in a very small room: no matter who detonates a bomb, we approach a state of annihilation. That risk should concern everyone.

Imagine that there are only 10 people in the room; they will mutually monitor and trust each other. However, imagine trusting 10 billion people, each with the ability to destroy everyone — such a scenario is beyond our ability to comprehend. Of course, technological development will advance not only offensive power but also defensive power, but it is not easy for defensive power to contain offensive power at the same time. If scientific and technological development is accelerated using artificial intelligence, for example, many countries could easily come to hold fleets of intercontinental ballistic missiles, and artificial intelligence using nanotechnology could be extremely dangerous to living organisms. The development or use of such dangerous substances could constitute a scenario that extinguishes mankind. Generally speaking, new offensive weapons are developed using the progress of technology, and defensive weapons are developed to neutralize them. Therefore, it is inevitable that periods will exist in which the offensive power needed to destroy humanity exceeds the available defensive power.

If the time ever comes when the offensive power to destroy humanity becomes democratized, as in the "small room" example I gave previously, the overall existential risk will increase. Also, if advanced AIs become decision-making entities themselves, the risk will increase further. After all, the biggest problem would be an increase in the number of decision makers who have the capability to bring extinction to the human race. The world is a much smaller place due to technology: originally, sheer spatial distance was the greatest defensive power, but that space has been substantially narrowed by technological progress. This corresponds to Dr. Hawking's warning that the human race is entering a dangerous period that will precede its advance into space.

Unfortunately, I cannot estimate all of the various risks as they exist, such as climate change, environmental change, pandemics, etc. However, with respect to these risks, there is a possibility that control can be achieved by leveraging AI. In any case, assuming a scenario in which AI advances rapidly in the future, the risk of extinction due to the increase in the number of decision makers (conscious agents) with the ability to extinguish the human race is quite critical. Unfortunately, a good strategy to ensure complete safety remains elusive at this point, and it’s important that we talk about it.

FLI: What do you think is the greatest benefit that AGI (Artificial General Intelligence) can bring society?

HY: AGI's greatest benefit comes from accelerating the development of science and technology. More sophisticated technology will offer solutions for global problems such as environmental issues, food problems, and space colonization.

As mentioned earlier, if the habitat of mankind is much wider than the range of attack power it has, there is a possibility that existential risk can be greatly suppressed. For that reason, I think that emphasis should be placed on using AI to advance space exploration.

Here I would like to share my vision for the future.

EcSIA: Desirable future coexisting with AI

In a desirable future, the happiness of all humans will be balanced against the survival of humankind under the purview of a superintelligence. In that future, society will be an ecosystem formed by augmented human beings and various public AIs, in what I term an ecosystem of shared intelligent agents (EcSIA).

Although no human can completely understand EcSIA—it is too complex and vast—humans can control its basic directions. In implementing such a control, the grace and wealth that EcSIA affords needs to be properly distributed to everyone.

(Hiroshi Yamakawa, July 2015)

FLI: What do you think is the greatest risk of AGI?

HY: If you are talking about maximum risk, then it would be an existential risk that will result from an increase in decision-makers with human-destructive capabilities, as I mentioned.

FLI: Assuming no global catastrophe halts progress, what are the odds of human level AGI in the next 10 years? By 2040? By 2100?

HY: The Whole Brain Architecture Initiative (WBAI) has an official goal of achieving AGI by around 2030. I think it is well known that Ray Kurzweil has predicted that we will reach human-level AGI by 2029, and I'm inclined to agree with him. Since we are an organization that promotes the development of AGI, we would not set a target like "possible in 100 years," well past the average estimate.

I responded to the survey on AI timeline estimates conducted by Nick Bostrom and others at Future of Humanity Institute (FHI) around 2011-12, and at that time, I estimated 2023. Personally, I think there’s a possibility that it can happen soon, but taking the average of the estimates of people involved in WBAI, we came up with 2030.

I think this is one point on which FLI's Asilomar conference wasn't able to reach consensus. Because the media has a tendency to whip up controversy about everything, even if we can't reach an expert consensus on an issue, if we can still communicate the point of view of expert technologists to the general public, I think that has a lot of value. Given this perspective, in my current role as the editorial chairman of the Japanese Society for Artificial Intelligence (JSAI) journal, I'm promoting a plan to run a series of discussions, starting in the July edition, on the theme of "Singularity and AI," in which AI specialists will discuss the singularity from a technical viewpoint. I want to help spread calm, technical views on the issue in this way, starting in Japan.

FLI: Once human level AGI is achieved, how long would you expect it to take for it to self-modify its way up to massive superhuman intelligence?

HY: To return to the earlier discussion about computer Shogi, in most cases one or a few programmers were able to create Shogi bots through steady incremental development. Even in those cases [where there was no breakthrough], once the AI reached the level of top human players, it only took a few years to reach superhuman levels.

The moment AI's versatility reaches a certain level of economic value, investment in it will increase rapidly. If so, the ability of AGI may come to exceed the human level at a pace incomparable to that of game AI. Furthermore, if human-level AGI is achieved, it could take on the role of an AI researcher itself. Therefore, immediately after AGI is built, it could start rapidly spawning great numbers of AI-researcher AIs that work 24/7, and AI R&D would be drastically accelerated.

By the way, Ray Kurzweil has predicted that by 2029 AI will achieve intelligence equivalent to that of one person, and that 16 years later, around 2045, it will achieve intelligence equal to that of all of humanity combined. For the reasons described above, I have a drastically shorter timeline for the second half of that prediction; I think it might be a few days or a few months. From my perspective, 16 years seems too long a time to go from one human to all of humanity, unless some kind of technological bottleneck slows things down during that period, such as needing massive amounts of electricity to run many AGIs.

FLI: What probability do you assign to the possibility of negative/extremely negative consequences as a result of badly done AI design or operation?

HY: If you count even small impacts, it is practically 100%. If you include the risk of something like a company losing a lot of money, that will definitely happen. When thinking about risk, you have to consider the size of the impact and, at the same time, the frequency and likelihood of the risk. Of course, the impact of existential risk would be extremely large. Similarly, international war and losses due to misuse would likely have the largest impacts.

As a more immediate issue, there is also the automation of productive abilities. The range of things that can be done with AI is becoming wider, and those who know how to use it will become dramatically more productive, whereas those who cannot will see a drastic decrease in the value of their productivity. In other words, the disparity will widen between those who profit from it and those who do not, both on a personal level and on an organizational level. When that happens, the resulting economic situation will give rise to dissatisfaction with the system, and that could create a breeding ground for war and strife. This could be seen as one of the evils brought about by capitalism. At any rate, it's important that we try to curtail the causes of instability as much as possible.

One potential countermeasure could be the introduction of a system like basic income. In Japan, for example, it would mean something like distributing 70,000-80,000 yen ($700-800) to every person, every month. Of course, we're not just talking about Japan here; somehow, measures need to be taken that would achieve economic balance. Unless capitalism changes quickly, this will become a pressing issue, and I think we will need some kind of system to suppress that risk.

As for the probability of an extremely bad outcome, like existential risk, I have tried putting some thought into how to calculate it. If you consider the number of decision-making bodies with capabilities that can wipe out humanity, then based on that you can calculate a "half-life" for how many years humanity can survive. For example, if you have a model with 10 million such people, it depends on what percentage of them you think would want to push the big red button. You can do a straightforward calculation that way. But if you start to take risk-inhibiting factors into account, there are a lot of things you could consider, like how many people would try to stop the big red button from being pushed, and it could get very complicated to try to calculate a probability. It's possible to try, but I am not sure how valuable it would be.
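
To make the kind of back-of-the-envelope calculation described above concrete, here is a minimal sketch in Python. The population size and per-person probability are illustrative placeholders, not figures from the interview: assuming N independent decision makers, each with a small annual probability p of "pushing the button," the chance of surviving a given year is (1 - p)^N, and the half-life is the number of years after which the cumulative survival probability drops to 50%.

```python
import math

# Illustrative placeholders only; not numbers given in the interview.
N = 10_000_000   # decision makers assumed capable of wiping out humanity
p = 1e-10        # assumed annual probability that any single one of them acts

# Probability that nobody acts in a given year, assuming independence.
survive_one_year = (1.0 - p) ** N

# Half-life: years until the cumulative survival probability falls to 50%.
half_life_years = math.log(0.5) / math.log(survive_one_year)

print(f"P(survive one year) = {survive_one_year:.4f}")
print(f"Half-life of humanity ≈ {half_life_years:.0f} years")
```

Risk-inhibiting factors, such as people working to stop the button from ever being pushed, would effectively shrink p, which is exactly why the calculation becomes complicated as soon as they are included.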

FLI: Is it too soon for us to be researching AI Safety or how to align AI with human values? Andrew Ng, Chief Scientist at Baidu Research, argues it’s like worrying about “overpopulation on Mars” — if we don’t know what the architecture of an AGI will look like, it’s hard to design the safety mechanisms. On the other hand, Stuart Russell, Prof. of Computer Science at UC Berkeley, argues that “it’s as if we were spending billions moving humanity to Mars with no plan for what to breathe.” Do you think it would be productive to start working on AI Safety now?

HY: I do not think it is at all too early to act for safety, and I think we should move forward quickly. Technological development is accelerating at a fast pace, as Kurzweil predicted. Though we may be in the midst of this exponential development, human insight remains relatively linear, so we may still be far from the correct answer. In situations where humans are exposed to fears or risks, something referred to as "normalcy bias" in psychology typically kicks in. People essentially think, "Since things have been OK up to now, they will probably continue to be OK." Though this is often correct, in this case we should correct for this bias.

If possible, we should have several methods for calculating the existential risk brought about by AGI. First, we can look at the Fermi Paradox. This is a type of estimation that proposes we can estimate the time at which intelligent life goes extinct based on the fact that we have not yet encountered alien life and on the probability that alien life exists. However, this type of estimation leads to a rather gloomy conclusion, so it doesn't really serve as a good guide for what we should do. As I mentioned before, it probably makes more sense to think in terms of the increase in decision-making bodies that have the power to bring about the destruction of humanity.

FLI: Is there anything you think that the AI research community should be more aware of, more open about, or taking more action on within AGI research and how it might impact society? Similarly, what would be your message to business leaders or policymakers about how society should prepare itself for changes brought about by AGI?

HY: There are a number of actions that are obviously necessary. Based on this, we established measures such as the Ethics Committee of the Japanese Society for Artificial Intelligence in May 2015 (http://ai-elsi.org/), and the subsequent Ethical Guidelines for AI researchers (http://ai-elsi.org/archives/514).

The head of this Ethics Committee is Yutaka Matsuo, an associate professor at the University of Tokyo, who also participated in the 2017 Beneficial AI conference in Asilomar and is a vice chairperson of WBAI. The majority of the content of these ethical guidelines expresses the standpoint that researchers should move forward with research that contributes to humanity and society. Additionally, one special characteristic of these guidelines is that the ninth principle, a call for ethical compliance of AI itself, states that AI in the future should also abide by the same ethical principles as AI researchers. Since 2015, the Ministry of Economy, Trade and Industry; the Ministry of Education, Culture, Sports, Science and Technology; and the Ministry of Internal Affairs and Communications in Japan have all come out with positions on the development of AI. As this activity grew into 2016, the amount of time dedicated to considering advances in AI began to increase.

Japan has traditionally searched for new solutions while following other countries, and it rightly continues to use the movements of other countries as a reference. We have now come to a phase of also thinking about what we can do ourselves.

As an example, from March 14 to 15 this year, at the University of Tokyo, which is close to our lab, the Ministry of Internal Affairs and Communications held a two-day international symposium, the "International Forum toward AI Network Society" (http://www.soumu.go.jp/menu_news/s-news/01iicp01_02000056.html). The symposium was filled to capacity with about 200 participants. The purpose of the forum was to promote international discussion regarding the formulation of guidelines for the development of AI. Participants from abroad included members of groups like the G7 and OECD, and individuals such as Greg Corrado from the Partnership on AI, Edward Felten from the White House Office of Science and Technology Policy, Jaan Tallinn from the Future of Life Institute, and Robert Bley from the European Committee on Judicial Affairs, who all gave impressive talks.

Professor Koichi Hori, Technical Advisor for the Subcommittee on AI R&D Principles of the Conference, also took the stage on the second day to speak during the panel on the "Risks of Artificial Intelligence." One point of discussion during the panel was that, whether you are talking about a human subordinate or an artificial intelligence, autonomy is useful but at the same time risky, so it is important to appropriately limit that autonomy in order to limit risk. As a suggestion for controlling AI implemented in the internet cloud, I mentioned the use of a network kill-switch as one means of generally controlling or limiting autonomy.

Countries other than those mentioned above are also actively taking a more grassroots approach. Academically within the artificial intelligence community, discussion at events like the Symposium on Artificial General Intelligence (http://www.sig-agi.org/sig-agi) is influencing the progress of technology and development in society.

 Also, the co-representative for WBAI, Koichi Takahashi, is involved in a number of events such as the AI Society Meeting (http://aisocietymeeting.wixsite.com/ethics-of-ai) and AIR: Acceptable Intelligence with Responsibility (http://sig-air.org/). These events have the purpose of defining new relationships with AI through discussion on aspects like ethics, economics, law, sociology, and philosophy. In another grassroots venue for discussing the singularity called the Singularity Salon (http://singularity.jp/), participants share general information about the influence of artificial intelligence that has exceeded human intelligence.

In addition to the movements mentioned above, AI now appears in the news every day. With this, and with the third AI boom that started around 2014, the way the world looks at AI is changing significantly.

If we look back at society's way of reacting to AI, we saw questions in some media outlets like "Will AI take our jobs?" Specialists usually replied with a conservative response like "Almost no cases exist where AI is taking our jobs." The book "Race Against the Machine" has also been translated into Japanese. However, in 2015, the premise that AI could start to displace jobs began to be accepted. The topic was no longer treated as taboo, and an atmosphere of open discussion began to arise. Then, around 2016, people started to discuss what AI should actually do once it is deployed, using examples like the trolley problem, a well-known thought experiment, as it applies to self-driving cars. Currently, in 2017, discussion of the transparency and controllability of AI is progressing.

FLI: Many observers have noted that Japan, as a society, seems more welcoming of automation, and that robots have generally been portrayed more positively in Japanese culture than in Western culture. Do you think the Japanese view of AI is different than that in the West, and if so, how? There is currently a vigorous public debate in the English-speaking world about superintelligence, the technological singularity, and AI safety. Is that debate also happening in Japan?

HY: If we look at things from the standpoint of a moral society, we are all human, and without dividing the world into one country or another, we should in general start from the mentality that we have more characteristics in common than differences.

When looking at AI against the traditional background of Japan, there is a strong influence from beliefs that spirits, or "kami," dwell in all things. In other words, the boundary between living things and humans is relatively unclear, and along the same lines, the boundaries for AI and robots are unclear as well. For this reason, robotic characters like "Tetsuwan Atom" (Astro Boy) and Doraemon have been depicted as living and existing in the same world as humans, a theme that has been pervasive in Japanese anime for a long time. By the way, there are a variety of restaurants with international dishes in Tokyo, which may also be a sign of the acceptance of diversity in Japanese culture.

Actually, I think the integration of diversity is a valid means of decreasing existential risk. Science and technology are advancing at an exponential rate, accompanied by an accelerated rate of societal change. A means for coping with this change is diversity. I touched on this when I mentioned EcSIA previously, but from here on out, we will not see humans and AI as separate entities. Rather, I think we will see the appearance of new combinations of AI and humans. Becoming more diverse in this way will certainly improve our chances of survival. On the other hand, improving chances of survival through diversity comes with a significant sacrifice. In other words, regardless of the choices we make to improve our rates of survival, we won't know their success beforehand. For this reason, diversity is meaningful. If we can deal with the challenges that come with this kind of sacrifice and successfully surround ourselves with AI, it will probably become possible to reduce human suffering and improve our lifespans.

If we push further the discussion on the diversity of things that should exist, we arrive at the question, "What should we leave for the future universe?" As a very personal view, I think that "surviving intelligence" is something that should be preserved into the future, because I feel it is very fortunate that we have established an intelligent society now, beyond the stormy sea of evolution. If so, intelligence itself looks worthy of preservation. Imagine a future in which humanity is living with intelligent extraterrestrials after first contact. We would start caring not only about the survival of humanity but also about that of the intelligent extraterrestrials. If that happens, one future scenario is that our dominant values will extend to the survival of intelligence rather than the survival of the human race itself.

If we for a moment take the purpose of humanity to be "to propagate intelligence through space," the existential risk can be dramatically reduced. The possible forms of intelligence that could be propagated through space are not only humans but also artificial intelligence, genetically modified beings, and bacteria or viruses that become intelligent species. As an example, an organism called a water bear (tardigrade) can essentially hibernate even in extreme environments like space. Hibernating in a similar way might allow us to store genetic information for tens of thousands of years and spread intelligent life throughout space. If this happens, a singularity will occur on other planets once every ten thousand years or so. Even if the probability that each civilization will pass through a singularity is low, it should eventually succeed somewhere among the many planets. If humans can be the starters of this spectacular project in the universe, the achievements of humanity will be forever engraved into the history of the universe. If you bet on this story, we should leave some record that can be read by future creatures, for our honor.

This literally means abandoning humanity in its current form, but humanity itself would be propagated through space like the hibernating water bear or uploaded into artificial intelligence. Increasing our options for propagation in this way should also increase our chances for survival. In any event, we will continue to increase the number of technological choices at our disposal, and by determining what we want to carry forward, a sort of value judgment, we should be able to limit the primary factors behind existential risk. On the one hand, the appearance of intelligence that surpasses humans will likely invalidate the idea that "humans are bestowed with some special significance due to their high intelligence," which will have a significant impact on humanity's sense of value. On the other hand, we can treat this as a chance to figure out what we need to convey to future generations. Richard Dawkins regarded living beings as the vehicles of genes; in the same way, we will likely outgrow the thought that the brain is the only vehicle for intelligence. Given the Japanese tradition of assimilating aspects of other cultures, I think the choices described above will find an acceptable foundation here.

FLI: Japan used to be synonymous with developing cutting-edge technology: CD’s, VHS, DVD’s, Bullet Trains, LCD’s, digital cameras… and the list goes on. But as emerging economies like China and South Korea have grown, Japan’s economy has stagnated, resulting in a smaller share of the global economic pie. However, Japan still has some of the world’s best universities, researchers, and engineers. What can Japan do to regain its role as the leading powerhouse of technological innovation?

HY: With whole brain architecture, we are geared toward developing an AI that can harmonize with humans. However, as human beings, people in a given country are more likely to think that people in that country are more important than humanity as a whole. This is similar to the tendency of human beings to think that humans are more important than, say, intelligent water bears.

While it is already quite obvious, Japanese people in business and government roles are racking their brains over what kind of AI to develop and how to apply it in society. For Japan, I don't think it is easy to simply decide on a strategy, but let me state a few possibilities.

First of all, with AI, Japan should at the very least not end up at a disadvantage. Recent machine learning and deep learning are primarily mathematics-based, so they are not restricted to a particular language. Moreover, when reading recent academic papers, we can almost completely read them using machine translation. Unfortunately, Japan was at a disadvantage in this sense in the early 2000s. Another unfortunate example is that of Mixi, a Japan-based social network that essentially lost to the global market; now everyone uses Facebook. In AI research that stems from machine learning, Japan's researchers, who have traditionally been strong in mathematics and theoretical physics, can use this as an opportunity to demonstrate their strengths.

On a related note, it is often said that Japan's strength lies not so much in IT as in craftsmanship and finesse. In a period where AI now has the ability to create, the transfer of skills from Japan's engineers and craftsmen will be a significant strength. This is an approach similar to Germany's "Industry 4.0." For example, consider the aforementioned Yutaka Matsuo's advocacy for "Food x AI." Japan imports a wide variety of cuisines from around the globe and has refined its palate with a great number of dishes, and Tokyo, the capital, stands out in particular in this sense. In areas like agriculture that support this tendency, regions that embody this Japanese sensibility hold certain expectations for the implementation of AI.

Next, though it may sound paradoxical, Japan is said to be a country at the forefront of social problems. Especially concerning the problem of an aging society, Japan leads the world. Because of this, research and development of robotics for the comfort and care of aging people, personalized preventative care, and health monitoring devices will likely progress in this market. At the same time, the pace at which robots can begin to replace individuals in the labor market will increase. As a starting point, I think that AI-related technologies will grow in these areas.

Finally, in the world from here on out, we will see the development of general, autonomous, superhuman intelligence. In response, humanity will likely create mechanisms to control this technology at the international level. Even if AI is developed separately in different places, development guidelines, the development itself, and standards for use will take shape, and AI designed to monitor the function and presence of other AI will likely appear. We predict that international organizations like the UN, as well as national governments and enterprises, will control AI as a whole. In this flow of things, for Japan to function as a standing advisor on the management of AI, I think it will be important for Japan to be regarded as a trustworthy country, by simultaneously advancing both its technical capability and its ethics.

In the event that a genius somehow completes AGI overnight, I hope that he or she will want to consult with Japan. I’d like it to be the Whole Brain Architecture Initiative that can support such activities in the near future.

[end of recorded material]

This interview was prepared by Eric Gastfriend, Jason Orlosky, Mamiko Matsumoto, Benjamin Peterson, and Kazue Evans. Original interview date: April 5, 2017

This content was first published at futureoflife.org on October 12, 2017.

