
Podcast: Top AI Breakthroughs and Challenges of 2017 with Richard Mallah and Chelsea Finn

Published
31 January, 2018

AlphaZero, progress in meta-learning, the role of AI in fake news, the difficulty of developing fair machine learning -- 2017 was another year of big breakthroughs and big challenges for AI researchers!

To discuss this more, we invited FLI's Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month's podcast. They talked about some of the technical progress they were most excited to see and what they're looking forward to in the coming year.

This podcast was edited by Tucker Davey.

Transcript

Ariel: I'm Ariel Conn with the Future of Life Institute. In 2017, we saw an increase in investments into artificial intelligence. More students are applying for AI programs, and more AI labs are cropping up around the world. With 2017 now solidly behind us, we wanted to take a look back at the year and go over some of the biggest AI breakthroughs. To do so, I have Richard Mallah and Chelsea Finn with me today.

Richard is the director of AI projects with us at the Future of Life Institute, where he does meta-research, analysis and advocacy to keep AI safe and beneficial. Richard has almost two decades of AI experience in industry and is currently also head of AI R & D at the recruiting automation firm, Avrio AI. He's also co-founder and chief data science officer at the content marketing planning firm, MarketMuse.

Chelsea is a PhD candidate in computer science at UC Berkeley, and she's interested in how learning algorithms can enable robots to acquire common sense, allowing them to learn a variety of complex sensorimotor skills in real-world settings. She completed her bachelor’s degree at MIT and has also spent time at Google Brain.

Richard and Chelsea, thank you so much for being here.

Chelsea: Happy to be here.

Richard: As am I.

Ariel: Now normally I spend time putting together questions for the guests, but today Richard and Chelsea chose the topics. Many of the breakthroughs they're excited about were more about behind-the-scenes technical advances that may not have been quite as exciting for the general media. However, there was one exception to that, and that's AlphaZero.

AlphaZero, which was DeepMind's follow-up to AlphaGo, made a big splash with the popular press in December when it achieved superhuman skills at Chess, Shogi and Go without any help from humans. So Richard and Chelsea, I'm hoping you can tell us more about what AlphaZero is, how it works and why it's a big deal. Chelsea, why don't we start with you?

Chelsea: Yeah, so DeepMind first started with developing AlphaGo a few years ago, and AlphaGo started its learning by watching human experts play -- watching how they play moves, how they analyze the board -- and then, once it had learned from those human experts, it started learning on its own.

What's exciting about AlphaZero is that the system started entirely on its own without any human knowledge. It started just by what's called “self-play,” where the agent, where the artificial player is essentially just playing against itself from the very beginning and learning completely on its own.

And I think that one of the really exciting things about this research and this result was that AlphaZero was able to outperform the original AlphaGo program, and in particular was able to outperform it by removing the human expertise, by removing the human input. And so I think that this suggests that maybe if we could move towards removing the human biases and removing the human input and move more towards what's called unsupervised learning, where these systems are learning completely on their own, then we might be able to build better and more capable artificial intelligence systems.

Ariel: And Richard, is there anything you wanted to add?

Richard: So, what was particularly exciting about AlphaZero is that it's able to do this by essentially a technique very similar to what Paul Christiano of AI Safety fame has called “capability amplification.” It's similar in that it's learning a function to predict a prior, or an expectation, over which moves are likely at a given point, as well as a function to predict which player will win. And it's able to do these in an iterative manner, by applying what's called an “amplification scheme” in the more general sense. In this case it was Monte Carlo tree search, but in the more general case it could be other, more appropriate amplification schemes for taking a simple function and iterating it many times to make it stronger -- essentially producing an amplified function that is then summarized back into the simpler one.
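As a rough illustration of the loop Richard is describing -- a learned prior over moves plus a value estimate, amplified by search and then trained on its own self-play games -- here is a heavily simplified Python sketch. The toy game, the stub network, and the stand-in search procedure are placeholders for illustration only, not DeepMind's actual system.

```python
import random

def legal_moves(state):
    return list(range(3))                 # toy game: three possible moves

def apply_move(state, move):
    return state + (move,)

def is_terminal(state):
    return len(state) >= 5                # toy game ends after five plies

def game_outcome(state):
    return random.choice([-1, 1])         # placeholder for win/loss

def network(state):
    """Stand-in for the policy/value net: uniform prior, neutral value."""
    moves = legal_moves(state)
    prior = {m: 1.0 / len(moves) for m in moves}
    return prior, 0.0

def amplified_policy(state, simulations=50):
    """Amplification step: sharpen the prior with extra computation.
    A real system would run Monte Carlo tree search here."""
    prior, _ = network(state)
    visits = {m: 0 for m in prior}
    for _ in range(simulations):
        move = random.choices(list(prior), weights=list(prior.values()))[0]
        visits[move] += 1
    total = sum(visits.values())
    return {m: c / total for m, c in visits.items()}

def self_play_game():
    """Play one game against itself, recording training targets."""
    state, records = (), []
    while not is_terminal(state):
        target = amplified_policy(state)
        records.append((state, target))
        state = apply_move(state, max(target, key=target.get))
    z = game_outcome(state)
    # (state, improved move distribution, final outcome): the network would
    # then be trained to predict `target` and `z` from `state`.
    return [(s, target, z) for s, target in records]

if __name__ == "__main__":
    print(f"collected {len(self_play_game())} training examples from one game")
```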

Ariel: So I do have a quick follow up question here. With AlphaZero, it's a program that's living within a world that has very strict rules. What is the next step towards moving outside of that world with very strict rules and into the much messier real world?

Chelsea: That's a really good point. The catch with these results is that these types of games -- and even video games, which are a little bit messier than the strict rules of a board game -- can all be perfectly simulated. You can perfectly simulate what will happen when you make a certain move or take a certain action, whether in a video game or in the game of Go or the game of Chess, et cetera. Therefore, you can train these systems with many, many lifetimes of data.

The real physical world, on the other hand, we can't simulate. We don't know how to simulate the complex physics of the real world. As a result, you're limited by the number of robots that you have if you're interested in robots, or, if you're interested in healthcare, you're limited by the number of patients that you have. And you're also limited by safety concerns, the cost of failure, et cetera.

I think that we still have a long way to go towards taking these sorts of advances into real world settings where there's a lot of noise, there's a lot of complexity in the environment, and I think that these results are inspiring, and we can take some of the ideas from these approaches and apply them to these sorts of systems, but we need to keep in mind that there are a lot of challenges ahead of us.

Richard: So between real-world systems and something like the game of Go, there are also incremental improvements, like introducing support for partial observability, or more stochastic environments, or more continuous environments as opposed to the very discrete ones. So for these challenges, assuming that we do have a situation where we could actually simulate what we would like to see, or use a simulation to help get training data on the fly, then in those cases we're likely to be able to make some progress using a technique like this, with some extensions or modifications to support those criteria.

Ariel: Okay. Now, I'm not sure if this is a natural jump to the next topic or not, but you've both mentioned that one of the big things you saw happening last year was new creative approaches to unsupervised learning, and Richard, in an email to me, you mentioned “word translation without parallel data.” So I was hoping you could talk a little bit more about what these new creative approaches are and what you're excited about there.

Richard: So this year, we saw an application of word embeddings, which are essentially these multidimensional vector spaces where the relationships between points are semantically meaningful. The space itself is learned by a relatively shallow deep-learning network, but the meaningfulness that is imbued in the space can actually be used. We saw this year that by taking vector spaces trained on corpora of different languages and comparing them, via some techniques to compare and reconcile the differences between those spaces, we're actually able to translate words between language pairs -- in some cases better than supervised approaches, which typically rely on parallel sets of documents that have the same meaning in different languages. In this case, we're able to do something very similar to what the Star Trek universal translator does: by consuming enough of the alien language, or the foreign language I should say, it's able to model the relationships between concepts and then realign those with the concepts that are known.

Chelsea, would you like to comment on that?

Chelsea: I don't think I have too much to add. I'm also excited about the translation results and I've also seen similar, I guess, works that are looking at unsupervised learning, not for translation, that have a little bit of a similar vein, but they're fairly technical in terms of the actual approach.

Ariel: Yeah, I'm wondering if either of you want to try to take a stab at explaining how this works without mentioning vector spaces?

Richard: That's difficult because it is a space, I mean it's a very geometric concept, and it's because we're aligning shapes within that space that we actually get the magic happening.

Ariel: So would it be something like you have different languages going in, some sort of document or various documents from different languages going in, and this program just sort of maps them into this space so that it figures out which words are parallel to each other then?

Richard: Well, it figures out the relationships between words, and based on the shape of those relationships, it's able to take those shapes and rotate them in a way that sort of matches up.

Chelsea: Yeah, perhaps it could be helpful to give an example. I think that generally in language you're trying to get across concepts, and there is structure within the language -- the structure that you learn about in grade school when you're learning vocabulary. You learn about verbs, you learn about nouns, you learn about people, and you learn about different words that describe these different things, and different languages share this sort of structure in terms of what they're trying to communicate.

And so, what these algorithms do is they are given basically data of people talking in English, or people writing documents in English, and they're also given data in another language -- and the first one doesn't necessarily need to be English. They're given data in one language and data in another language. This data doesn't match up; it's not like one document that's been translated into another, it's just pieces of language, documents, conversations, et cetera. And by using the structure that exists in the data, such as nouns, verbs, animals, people, it can basically figure out how to map from the structure of one language to the structure of another. It can recognize the similar structure in both languages and then figure out, basically, a mapping from one to the other.
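The geometric picture of aligning shapes has a compact core. Below is a small sketch in Python (NumPy) of that alignment step -- finding the orthogonal rotation that best maps one embedding space onto another -- using synthetic word vectors as stand-ins. In the fully unsupervised systems, the candidate word pairs are themselves bootstrapped (for example adversarially) rather than given, so treat this only as the alignment and refinement piece.

```python
import numpy as np

# Toy data: "language B" vectors are a hidden rotation of "language A" vectors.
rng = np.random.default_rng(0)
dim, n_words = 50, 200
X = rng.normal(size=(n_words, dim))                  # language A word vectors
hidden_rotation = np.linalg.qr(rng.normal(size=(dim, dim)))[0]
Y = X @ hidden_rotation                              # language B word vectors

# Orthogonal Procrustes: the best rotation is W = U V^T, where U S V^T = SVD(X^T Y).
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# Translate: map language-A vectors into language-B space, then take the
# nearest language-B word vector as the translation.
mapped = X @ W
nearest = np.argmax(mapped @ Y.T, axis=1)
print("translation accuracy on toy data:", np.mean(nearest == np.arange(n_words)))
```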

Ariel: Okay. So I want to keep moving forward, but continuing with the concept of learning, and Chelsea, I want to stick with you for a minute. You mentioned that there were some really big metalearning advances that occurred last year, and you also mentioned a workshop and symposium at NIPS. I was wondering if you could talk a little more about that.

Chelsea: Yeah, I think that there's been a lot of excitement around metalearning, or learning to learn. There were two gatherings at NIPS this year, one symposium and one workshop, and both were well attended. Actually, metalearning has a fairly long history, so it's by no means a recent or new topic, but I think that it has renewed attention within the machine learning community.

And so, I guess I can describe metalearning: it's essentially having systems that learn how to learn. There are a number of different applications for such systems. One of them is an application that's often referred to as AutoML, or automatic machine learning, where these systems can essentially optimize the hyperparameters -- basically figure out the best set of hyperparameters and then run a learning algorithm with that set -- essentially taking over the job of the machine learning researcher who is tuning different models on different data sets. And this can basically allow people to more easily train models on a data set.
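As a toy illustration of that first application, here is a minimal Python sketch of hyperparameter search by random sampling, one of the simplest AutoML recipes. The model, the data, and the single hyperparameter searched over (a learning rate) are placeholders; real systems search over architectures, regularizers, and more.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

def validation_loss(lr):
    """Train a small linear model by gradient descent with this learning rate."""
    w = np.zeros(5)
    for _ in range(50):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return float(np.mean((X @ w - y) ** 2))

# Random search: sample candidate learning rates on a log scale, keep the best.
candidate_lrs = 10 ** rng.uniform(-4, 0, size=20)
best_lr = min(candidate_lrs, key=validation_loss)
print(f"best learning rate found: {best_lr:.4f}")
```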

Another application of metalearning that I'm really excited about is enabling systems to reuse data and reuse experience from other tasks when trying to solve new tasks. So in machine learning, there's this paradigm of creating everything from scratch, and as a result, if you're training from scratch, from zero prior knowledge, then it's going to take a lot of data. It's going to take a lot of time to train because you're starting from nothing. But if instead you're starting from previous experience in a different environment or on a different task, and you can basically learn how to efficiently learn from that data, then when you see a new task that you haven't seen before, you should be able to solve it much more efficiently.

And so, one example of this is what's called One-Shot Learning or Few-Shot Learning, where you learn essentially how to learn from a few examples, such that when you see a new setting and you just get one or a few examples, labeled examples, labeled data points, you can figure out the new task and solve the new task just from a small number of examples.

One explicit example of how humans do this is that someone can point out a Segway to you on the street, and even if you've never seen a Segway before or never heard of the concept of a Segway, just from that one example of a human pointing it out to you, you can then recognize other examples of Segways. And the way that you do that is basically by having learned how to recognize objects over the course of your lifetime.
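One common way to set this up, sketched below in Python with random placeholders standing in for real images and a learned feature extractor, is nearest-prototype matching: embed the single labelled example of the new class and assign a query to whichever class prototype it lands closest to. This is just one illustrative formulation, not the specific systems discussed here.

```python
import numpy as np

rng = np.random.default_rng(1)
projection = rng.normal(size=(64, 16))        # stand-in for a learned extractor

def embed(x):
    """Placeholder for a feature extractor trained on many previous tasks."""
    return x @ projection

# One labelled example ("shot") per new class -- e.g. one photo of a Segway.
support_images = rng.normal(size=(3, 64))     # 3 new classes, 1 example each
prototypes = embed(support_images)            # one prototype per class

# A query image is assigned to the class of its nearest prototype.
query = support_images[1] + 0.1 * rng.normal(size=64)   # noisy view of class 1
distances = np.linalg.norm(prototypes - embed(query), axis=1)
print("predicted class:", int(np.argmin(distances)))
```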

Ariel: And are there examples of programs doing this already? Or we're just making progress towards programs being able to do this more effectively?

Chelsea: There are some examples of programs being able to do this in terms of image recognition. There's been a number of works that have been able to do this with real images. I think that more recently we've started to see systems being applied to robotics, which I think is one of the more exciting applications of this setting because when you're training a robot in the real world, you can't have the robot collect millions of data points or days of experience in order to learn a single task. You need it to share and reuse experiences from other tasks when trying to learn a new task.

So one example of this is that you can have a robot be able to manipulate a new object that it's never seen before based on just one demonstration of how to manipulate that object from a human.

Ariel: Okay, thanks.

I want to move to a topic that is obviously of great interest to FLI, and that is technical safety advances that occurred last year. Again, in an email to me, you both mentioned “inverse reward design” and “deep reinforcement learning from human preferences” as two areas related to the safety issue that were advanced last year. I was hoping you could both talk a little bit about what you saw happening last year that gives you hope for developing safer AI and beneficial AI.

Richard: So, as I mentioned, both inverse reward design and deep reinforcement learning from human preferences are exciting papers that came out this year.

So inverse reward design is where the AI system is trying to understand what the original designer or the original user intends for the system to do. So if it's in some new setting, a test setting where potentially problematic new things have been introduced relative to training time, then it tries specifically to back those out, or to mitigate their effects, so that's kind of exciting.

Deep reinforcement learning from human preferences is an algorithm for trying to very efficiently get feedback from humans based on trajectories in the context of reinforcement learning systems. So, these are systems that are trying to learn some way to plan, let's say a path through a game environment or in general trying to learn a policy of what to do in a given scenario. This algorithm, deep RL from human preferences, shows little snippets of potential paths to humans and has them simply choose which are better, very similar to what goes on at an optometrist. Does A look better or does B look better? And just from that, very sophisticated behaviors can be learned from human preferences in a way that was not possible before in terms of scale.
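The core of that feedback step can be sketched compactly: fit a reward model so that whichever snippet the human preferred receives the higher total predicted reward, via a logistic (Bradley-Terry-style) loss over summed rewards. The sketch below is a toy stand-in -- the human judgments are simulated and the reward model is linear -- whereas the actual work uses neural networks and then trains a deep RL agent against the learned reward.

```python
import numpy as np

rng = np.random.default_rng(2)
obs_dim, snippet_len, n_pairs = 8, 10, 500
true_w = rng.normal(size=obs_dim)        # hidden reward the simulated human uses
w = np.zeros(obs_dim)                    # parameters of the learned reward model
lr = 0.05

for _ in range(n_pairs):
    # Two candidate trajectory snippets, shown to the "optometrist" judge.
    a = rng.normal(size=(snippet_len, obs_dim))
    b = rng.normal(size=(snippet_len, obs_dim))
    human_prefers_a = a.sum(0) @ true_w > b.sum(0) @ true_w   # simulated answer

    # Model's probability that A is preferred, from summed predicted rewards.
    diff = a.sum(0) - b.sum(0)
    p_a = 1.0 / (1.0 + np.exp(-(diff @ w)))

    # Logistic-loss gradient step toward the human's choice.
    w -= lr * (p_a - float(human_prefers_a)) * diff

cosine = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print(f"cosine similarity between learned and true reward: {cosine:.2f}")
```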

Ariel: Chelsea, is there anything that you wanted to add?

Chelsea: Yeah. So, in general, I guess, going back to AlphaZero and going back to games in general, there's a very clear objective for achieving the goal, which is whether or not you won the game, or your score at the game. It's very clear what the objective is and what each system should be optimizing for: AlphaZero, when playing Go, should be optimizing for winning the game, and if a system is playing Atari games, it should be optimizing for maximizing the score.

But in the real world, when you're training systems, when you're training agents to do things, when you're training an AI to have a conversation with you, when you're training a robot to set the table for you, there is no score function. The real world doesn't just give you a score function, doesn't tell you whether or not you're winning or losing. And I think that this research is exciting and really important because it gives us another mechanism for telling robots, telling these AI systems how to do the tasks that we want them to do.

For example, the human preferences work, instead of having us specify some sort of goal that we want the robot to achieve, or give it a demonstration of what we want it to achieve, or some sort of reward function, lets us say, “okay, this is not what I want, this is what I want,” throughout the process of learning. And then as a result, at the end you can basically guarantee that if it was able to optimize for your preferences successfully, then you'll end up with behavior that you're happy with.

Ariel: Excellent. So I'm sort of curious, before we started recording, Chelsea, you were telling me a little bit about your own research. Are you doing anything with this type of work? Or is your work a little different?

Chelsea: Yeah. So more recently I've been working on metalearning, and some of the metalearning work that I talked about previously -- like learning just from a single demonstration, and reusing data and experience from other tasks -- has been what I've been focusing on in terms of getting robots to be able to do things in the real world, such as manipulating objects, pushing objects around, using a spatula, stuff like that.

I've also done work on reinforcement learning where you essentially give a robot an objective, tell it to try to get the object as close as possible to the goal, and I think that the human preferences work provides a nice alternative to the classic setting, to the classic framework of reinforcement learning, that we could potentially apply to real robotic systems.

Ariel: Chelsea, I'm going to stick with you for one more question. In your list of breakthroughs that you're excited about, one of the things that you mentioned is very near and dear to my heart, and that was better communication, and specifically better communication of the research. And I was hoping you could talk a little bit about some of the websites and methods of communicating that you saw develop and grow last year.

Chelsea: Yes. I think that more and more we're seeing researchers put their work out in blog posts and try to make it more accessible by explaining it in terms that are easier to understand and motivating it in words that are easier for the average person to follow, and I think that this is a great way to communicate the research clearly to a broader audience.

In addition, I've been quite excited about an effort, I think led by Chris Olah, on building what is called distill.pub. It's a website and an academic journal that tries to move away from the paradigm of publishing research on paper -- on trees, essentially. Because we have such rich digital technology that allows us to communicate in many different ways, it makes sense to move past completely written forms of research dissemination. And that's what distill.pub does: it allows researchers to communicate research ideas in the form of animations and interactive demonstrations on a computer screen, and I think this is a big step forward and has a lot of potential in terms of moving forward the communication and dissemination of research, both among the research community and beyond, to people who are less familiar with the technical concepts in the field.

Ariel: That sounds awesome, Chelsea, thank you. And distill.pub is probably pretty straightforward, but we'll still link to it on the post that goes along with this podcast if anyone wants to click straight through.

And Richard, I want to switch back over to you. You mentioned that there was more impressive output from GANs last year, generative adversarial networks.

Richard: Yes.

Ariel: Can you tell us what a generative adversarial network is?

Richard: So a generative adversarial network is an AI system with two parts: essentially a generator, or creator, that comes up with novel artifacts, and a critic that tries to determine whether what's being generated is a good or legitimate or realistic type of thing. Both are learned in parallel as training data is streamed into the system, so in this way, the generator learns relatively efficiently how to create things that are good or realistic.
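For concreteness, here is a minimal sketch of that two-part setup, written in Python with PyTorch and fitting a one-dimensional Gaussian rather than images so it stays tiny. The architectures and hyperparameters are arbitrary toy choices, not those of any particular published GAN.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0     # target: N(4, 1.5^2)

# Generator maps noise to samples; discriminator (the "critic") scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: real samples labelled 1, generated samples labelled 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label its output as real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"generated mean {samples.mean().item():.2f}, "
      f"std {samples.std().item():.2f} (target 4.0, 1.5)")
```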

Ariel: So I was hoping you could talk a little bit about what you saw there that was exciting.

Richard: Sure, so new architectures and new algorithms and simply more horsepower as well have led to more impressive output. Particularly exciting are conditional generative adversarial networks, where there can be structured biases or new types of inputs that one wants to base some output around.

Chelsea: Yeah, one thing to potentially add is that I think the research on GANs is really exciting, and I think that it will lead to advances not only in generating images of realistic quality, but also in generating other types of things -- generating behavior potentially, or generating speech, or generating language. We haven't seen as many advances in those areas as in generating images; thus far the most impressive advances have been in generating images. But I think that those are areas to watch out for as well.

One thing to be concerned about in terms of GANs is the ability for people to generate fake images and fake videos of different events happening and put those fake images and fake videos into the media, because while there might be ways to detect whether or not these images are made up or counterfeited, the public might choose to believe something that they see. If you see something, you're very likely to believe it, and this might exacerbate all of the fake news issues that we've had recently.

Ariel: Yeah, so that actually brings up something that I did want to get into, and honestly, that, Chelsea, what you just talked about, is some of the scariest stuff I've seen, just because it seems like it has the potential to create sort of a domino effect of triggering all of these other problems just with one fake video. So I’m curious, how do we address something like that? Can we? And are there other issues that you've seen crop in the last year that also have you concerned?

Chelsea: I think there are potentially ways to address the problem. If it seems like it's becoming a real danger in the imminent future, then I think that media websites, including social media websites, should take measures to try to detect fake images and fake videos and either prevent them from being displayed or put up a warning that the content was detected as fake, to explicitly try to mitigate the effects.

But, that said, I haven't put that much thought into it. I do think it's something that we should be concerned about, and the potential solution that I mentioned, I think that even if it can help solve some of the problems, I think that we don't have a solution to the problem yet.

Ariel: Okay, thank you. I want to move on to the last question that I have that you both brought up, and that was, last year we saw an increased discussion of fairness in machine learning. And Chelsea, you mentioned there was a NIPS tutorial on this and the keynote mentioned it at NIPS as well. So I was hoping you could talk a bit about what that means, what we saw happen, and how you hope this will play out to better programs in the future.

Chelsea: So, there's been a lot of discussion of how we can build machine learning systems, build AI systems, such that when they make decisions, they are fair and they aren't biased. All this discussion has been around fairness in machine learning, and actually one of the interesting things about the discussion from a technical point of view is how you even define fairness and how you define removing biases, because a lot of the biases are inherent to the data itself. And how you try to remove those biases can be a bit controversial.

Ariel: Can you give us some examples?

Chelsea: So one example is, if you're trying to build an autonomous car system that is trying to avoid hitting pedestrians -- to recognize pedestrians and respond to them appropriately -- and these systems are trained in environments and communities that are predominantly of one race, for example in Caucasian communities, and you then deploy the system in settings where there are people of color and in other environments it hasn't seen before, then the resulting system won't be as accurate in those settings and will be inherently biased when, for example, it tries to recognize people of color, and this is a problem.

Some other examples of this are machine learning systems that are making decisions about who to give health insurance to, or speech recognition systems that are trying to recognize different kinds of speech. If these systems are trained on a small part of the community that is not representative of the population as a whole, then they won't be able to accurately make decisions about the entire population. Or if they're trained on data that was collected by humans and that has the same biases as humans, then they will make the same mistakes -- they will inherit the same biases that humans have.

I think that, among the people who have been researching fairness in machine learning systems, unfortunately one of the conclusions they've reached so far is that there isn't a one-size-fits-all solution to all of these different problems, and in many cases we'll have to think about fairness in individual contexts.

Richard: Chelsea, you mentioned that some of the remediations for fairness issues in machine learning are themselves controversial. Can you go into an example or so about that?

Chelsea: Yeah, I guess part of what I meant there is that even coming up with a definition for what is fair is unclear. It's unclear what even the problem specification is, and without a problem specification, without a definition of what you want your system to be doing, creating a system that's fair is a challenge if you don't have a definition for what fair is.

Richard: I see.

Ariel: So then, my last question to you both, as we look towards 2018, what are you most excited or hopeful to see?

Richard: I'm very hopeful that the FLI grants program we announced at the very end of 2017 will lead to some very interesting and helpful AI safety papers, and AI safety research in general, that builds on past research, breaks new ground, and enables additional future research to be built on top of it -- making the prospect of general intelligence safer and something that we don't need to fear as much. But that is a hope.

Ariel: And Chelsea, what about you?

Chelsea: I think I'm excited to see where metalearning goes. I think that there are a lot more people paying attention to it and starting to research “learning to learn” topics. I'm also excited to see more advances in machine learning for robotics. Unlike other fields in machine learning, like machine translation, image recognition, et cetera, I think that robotics still has a long way to go in terms of being useful and solving a range of complex tasks, and I hope that we can continue to make strides in machine learning for robotics in the coming year and beyond.

Ariel: Excellent. Well, thank you both so much for joining me today.

Richard: Sure, thank you.

Chelsea: Yeah, I enjoyed talking to you.

