Not Cool Ep 17: Tackling Climate Change with Machine Learning, part 2
It’s time to get creative in the fight against climate change, and machine learning can help us do that. Not Cool episode 17 continues our discussion of “Tackling Climate Change with Machine Learning,” a nearly 100-page report co-authored by 22 researchers from some of the world’s top AI institutes. Today, Ariel talks to Natasha Jaques and Tegan Maharaj, the respective authors of the report’s “Tools for Individuals” and “Tools for Society” chapters. Natasha and Tegan explain how machine learning can help individuals lower their carbon footprints and aid politicians in implementing better climate policies. They also discuss uncertainty in climate predictions, the relative price of green technology, and responsible machine learning development and use.
Topics discussed include:
- Reinforcement learning
- Individual carbon footprints
- Privacy concerns
- Residential electricity use
- Asymmetrical uncertainty
- Natural language processing and sentiment analysis
- Multi-objective optimization and multi-criteria decision making
- Hedonic pricing
- Public goods problems
- Evolutionary game theory
- Carbon offsets
- Nuclear energy
- Interdisciplinary collaboration
- Descriptive vs. prescriptive uses of ML
References discussed include:
- Tackling Climate Change with Machine Learning
- Opower
- Detecting anthropogenic cloud perturbations with deep learning
- Inequity aversion resolves intertemporal social dilemmas
- MIT Media Lab flight offsets program
- What we know about climate change
The behaviors are just not on the same scale at all. The amount that you emit from taking a flight is just orders of magnitude more than almost anything else you're doing in your life. Being able to actually track that and understand it could be very empowering for individuals to change their behavior in a meaningful way.
~ Natasha Jaques
Transcript
Ariel Conn: Hi Everyone. Ariel Conn here with episode 17 of Not Cool, a climate podcast. Today, we’ll dive into Tackling Climate Change with Machine Learning, Part 2. On our previous episode, we heard from four of the 22 authors of that paper, and today we’ll hear from two more. Tegan Maharaj and Natasha Jaques will talk about how machine learning can be used to help us improve our own carbon footprints, how it can be used to improve climate policy, and much more.
Tegan’s most recent research aims to bring together the fields of deep learning and theoretical ecology. She has several active projects in ecosystem modeling with deep networks, including work on collecting datasets, multi-agent RL, counterfactual inference, and meta-learning. In January 2016 she began a PhD focused on deep learning research at Mila at the University of Montreal, and among other things, she recently co-organized a workshop at ICML called “Climate Change: How Can AI Help?”
Natasha is finishing her PhD at MIT, where she researches how to improve the social and emotional intelligence of AI and machine learning. She has interned at Google Brain and DeepMind, and was an OpenAI Scholars mentor. She received an honorable mention for best paper at ICML 2019, a best paper award at the NeurIPS ML for Healthcare workshop, and was part of the team that received Best Demo at NeurIPS 2016.
Natasha and Tegan, thank you so much for joining us.
Natasha Jaques: No problem.
Ariel Conn: You're both authors of this “using machine learning to tackle climate change” paper, which is a huge paper. We've interviewed some of the other authors as well. And I mean, my first question for both of you is just how did you get involved in working on this paper?
Natasha Jaques: Well, I've always wanted to participate more in helping the climate in whatever ways I can, because I do think we're facing a global climate crisis, and hopefully my machine learning expertise could be useful for that. So this is my first foray into working in this area. I was recruited by the first author, David Rolnick. I was asked to work on my section because my work in the Media Lab relates a lot to interpersonal and social aspects of human communication.
Ariel Conn: All right. And Tegan?
Tegan Maharaj: I met David at a lunch at the NeurIPS Conference organized by David Rolnick and various people from Mila, where I am a PhD student. We started talking there, and — with Priya Donti, one of the other main authors on this paper — David, Priya, and I sort of brainstormed some ideas both for a workshop and a paper; and it all happened. I think it was basically David's brainchild, and anybody he ran into who he thought could contribute well was brought on board.
Ariel Conn: Awesome. Before we get any farther, it's probably fair to assume that most listeners understand what machine learning and reinforcement learning are, but if you could just quickly explain what those terms mean and how they're different.
Natasha Jaques: Sure. Machine learning broadly tends to mean like the automatic recognition of patterns in data, so maybe discovering clusters of similar data or predicting trends given a bunch of past data. Reinforcement learning could be considered sort of a sub area of machine learning, but it really focuses on where you have an AI agent that's trying to interact with the environment. The agent takes an action, and the environment gives back some type of reward. The agent is trying to optimize for that reward, but it's not doing it greedily. It's not just trying to say, "I want to get the maximum reward I can right now," but it's trying to do sort of long-term planning to achieve the most possible reward over the course of the future. So we think about it as sequential decision making. And that's why it differs from just one step prediction that we see typically in the rest of machine learning.
Tegan Maharaj: I think Natasha pretty much covered it. The way that I usually describe machine learning, as opposed to any other computer algorithm that you would encounter: the fact that it's learning means that rather than having hard-coded rules for how to behave — like when you click this button, the algorithm will do this thing — the algorithm is trained from lots of examples to behave in a certain way, so it can recognize patterns that are fuzzier than the rule-based systems that people used before machine learning became very popular. And the reason that this is difficult and took a long time is that a lot of data, a lot of examples, are required to train these kinds of algorithms. Techniques for doing that well took a while to figure out.
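To make the loop Natasha describes concrete, here is a minimal sketch of the agent-environment interaction using tabular Q-learning on a hypothetical toy "corridor" task. The environment, reward, and hyperparameters are illustrative only, not anything from the report.

```python
# A minimal sketch of the agent-environment loop: tabular Q-learning on a
# hypothetical 5-state "corridor" where the only reward sits at the far end.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: move along the corridor, reward only at the far end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the highest-value action, breaking ties randomly."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
for episode in range(300):
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward, done = step(state, action)
        # Update toward immediate reward plus the discounted value of the next
        # state: this is the long-term-planning part, not just greedy reward.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy walks right toward the reward.
print({s: greedy(s) for s in range(N_STATES)})
```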
Ariel Conn: Okay. I'm really excited to have both of you on because we've been doing this podcast for a few weeks now, and for the most part it's talking about what climate change is and why it's so bad, but we're sort of limited in solutions. And so we do get into a lot of the solutions that are covered earlier in the paper, but one of the things that I really liked about your sections is that you look specifically at what people can be doing at an individual level and what we can be doing at a societal level.
For listeners, Natasha was the author of the “Tools for Individuals” section; Tegan is the author of the “Tools for Society” section. So we'll be asking them both questions about their sections but, as I've told them, hopefully they'll also both be interjecting with their own thoughts, even if it wasn’t technically their name attached to the section. So Natasha, let's start with you. Yours is tools for individuals. I really love the idea of using machine learning to calculate individual carbon footprints, which is one of the things that you talk about. Can you first explain some of the ways that that could be done?
Natasha Jaques: If a person is willing to give us some of their data, we can do simple things to extract information about their personal carbon footprint. For example, an app could extract information about what flights you are taking from your email and automatically calculate the carbon footprint of that, or of the groceries that you're buying. You can hook it up to your ride sharing apps; we can calculate the carbon footprint of the amount of Ubers that you're taking; and then potentially present this to you in a way that makes it very easy for you to see what are your most high-emitting behaviors, and focus on the things that really matter if you want to reduce your carbon footprint.
What we learned from this is that the behaviors are just not on the same scale at all. The amount that you emit from taking a flight is just orders of magnitude more than almost anything else you're doing in your life. Being able to actually track that and understand it could be very empowering for individuals to change their behavior in a meaningful way.
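As a rough illustration of the kind of per-activity summary Natasha describes, here is a sketch that totals a month of hypothetical activities against placeholder emission factors. The factors and quantities are invented for illustration, not figures from the report.

```python
# Illustrative per-activity footprint summary. The emission factors below are
# rough placeholder values for illustration only, not figures from the report.
EMISSION_FACTORS_KG_CO2E = {
    "flight_hour": 250.0,      # per hour of economy flying (placeholder)
    "rideshare_km": 0.2,       # per km in a typical gasoline car (placeholder)
    "beef_kg": 27.0,           # per kg of beef (placeholder)
    "vegetables_kg": 2.0,      # per kg of vegetables (placeholder)
}

# Activities a hypothetical app might have extracted from email receipts,
# ride-sharing history, and grocery bills over one month.
activities = [
    ("flight_hour", 6),        # one round trip, roughly 6 hours in the air
    ("rideshare_km", 120),
    ("beef_kg", 2),
    ("vegetables_kg", 10),
]

footprint = {}
for kind, amount in activities:
    footprint[kind] = footprint.get(kind, 0.0) + amount * EMISSION_FACTORS_KG_CO2E[kind]

total = sum(footprint.values())
for kind, kg in sorted(footprint.items(), key=lambda kv: -kv[1]):
    print(f"{kind:15s} {kg:8.1f} kg CO2e  ({100 * kg / total:4.1f}% of total)")
```

Run as written, the flight line dominates the total, which is exactly the point Natasha makes above.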
Ariel Conn: You talked about creating apps that can tap into my email and track emails I'm getting. In the paper, you also talk about ways that commercial systems could be implementing this. I was hoping you could talk a little bit about some of the tradeoffs between, say, me adding an app, or Uber or Delta or some other company tracking this — what are some of the tradeoffs between me having more control of it and a commercial system setting it up?
Natasha Jaques: That's a great question. For the individual, obviously, if you want to give up some of your data, you have to give up some privacy. That's a concern. We do have better and better machine learning algorithms that can work on device, so you actually may not have to worry about privacy as much; but privacy is a concern. But then with doing this from a big institution, like let's say we wanted a grocery store that could print the carbon emissions of every item you buy on your bill: well, that takes a lot of buy-in from the grocery store, and it's not clear that they will be super motivated to do this. To the extent that those institutions aren't willing to put those programs in place, individuals may be more empowered to build those tools on their own or buy into them.
Ariel Conn: I guess if we could get a combination of both, it seems like that would be ideal. Is that your take?
Natasha Jaques: I think so. I mean, the more information that's available, the more it's going to help the individual to make better decisions.
Ariel Conn: Are there examples of applications that have already been designed that we can start looking into, or is that something that you're hoping to motivate ML researchers to create?
Natasha Jaques: People are already starting to work on this. There's a few apps that are starting to come out that we reference in the paper; if you're curious, you can go check them out. And so, it might be something that you could see on your phone within a few months.
Ariel Conn: Awesome. Moving on in your section, one of the things that I thought was interesting as well is that it turns out our homes account for 30% of global electricity consumption. To what extent did you look at this on a global scale where we can say homes account for 30%, versus, say, homes in the US?
Natasha Jaques: I do have the US figure for you if you're curious.
Ariel Conn: Yes.
Natasha Jaques: I think residential electricity usage in the US is actually 21.8 percent — that's from a 2014 study from the US government. I tended to try to look more at global figures, but I think you could find both in the paper if you were curious.
Ariel Conn: Were there countries in which homes do better or worse? I'm actually surprised that you're saying that — if I'm understanding what you're saying correctly — that homes in the US are below the 30%?
Natasha Jaques: It's a complicated question because it depends on how much energy different industries in the rest of the country are using, so it's going to vary widely.
Ariel Conn: Okay. We'll move on from that. You also say that standby power consumption accounts for 8% of residential electricity demand, and if I did my math right, that means that the standby power that we're using in our homes — so that's the power that we're not actually using; it sounds like that's just stuff that's plugged in. Is that correct?
Natasha Jaques: Right.
Ariel Conn: That actually accounts for roughly 2.4% of global electricity consumption overall.
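For reference, the arithmetic behind that estimate combines the two percentages quoted above (the standby share of residential use times the residential share of global use):

```latex
0.08 \times 0.30 = 0.024 \approx 2.4\% \ \text{of global electricity consumption}
```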
Natasha Jaques: It's pretty surprising how much standby power can consume. There's an interesting reference that shows that a laser printer that's just plugged in, but sitting idle, is actually drawing about 17 watts, which is apparently the same consumption as a fridge-freezer. So it's just massive. If you actually look into this resource a little bit more: big-screen TVs, flatscreen TVs, a lot of electronic devices, even random things like pottery wheels consume a surprising amount of power when they're not in use.
There are a lot of these devices that people may leave plugged in without being aware at all of how much energy they're consuming. And similarly if you have a second fridge downstairs that you don't really use very much, that can be very expensive in terms of power. A nice role we see for machine learning to play is doing this energy disaggregation and identifying which devices are consuming the most energy in your home and at what times, and making this information available to the consumer so they can make smarter choices about what they're plugging in and when.
Ariel Conn: Wow. So even without machine learning, have you found that you're unplugging things more?
Natasha Jaques: Well, I don't have a laser printer at home, but if I did, I'd be unplugging it.
Ariel Conn: Right.
Natasha Jaques: Right. Yeah.
Ariel Conn: Okay. And then, so how do machine learning systems like that work?
Natasha Jaques: There are a couple of different really interesting options that you could do. One of the most promising solutions seems to be to plug in a device at the main electrical connection from your house to the rest of the grid. And then you can actually use that aggregated energy signal, in combination with more and more sophisticated machine learning techniques, to disaggregate that signal into a time series of which appliances are coming on at what time, and how much energy they're consuming. And so by making that information available to the homeowner, they can make better decisions about this. And then you can even go one step farther, and you can start doing something really cool — which is, if your devices are outfitted with this capability, you can remotely turn them on and off at the appropriate times to minimize your power consumption.
Imagine that you want to be able to charge your electric vehicle at the right time so that you're actually using sustainable energy. If your grid uses a lot of renewable energy, like solar and wind, often the grid still needs to have backup power that's actually pretty carbon intensive. Maybe there's like natural gas or coal backup power that needs to come on if it's a cloudy day or there's no wind. You can actually use machine learning to predict when the energy being supplied by the grid is the most green, and turn on your devices at that time. So that could really help reduce emissions.
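Here is a small sketch of that scheduling idea: given an hour-by-hour forecast of grid carbon intensity, pick the lowest-carbon contiguous window to charge. The forecast numbers are invented for illustration; in practice, producing that forecast is where a machine learning model would come in.

```python
# Sketch of "charge when the grid is greenest": pick the lowest-carbon window
# in a (made-up) hourly forecast of grid carbon intensity, in g CO2 per kWh.
forecast_g_co2_per_kwh = [
    420, 410, 400, 390, 380, 350,   # 00:00-05:00 overnight, fossil backup
    300, 240, 180, 140, 120, 110,   # 06:00-11:00 solar ramping up
    100, 105, 115, 150, 210, 300,   # 12:00-17:00 midday solar, evening ramp
    380, 430, 450, 440, 430, 425,   # 18:00-23:00 evening peak
]

def best_charging_window(forecast, hours_needed):
    """Return the start hour of the contiguous window with the lowest average intensity."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - hours_needed + 1):
        avg = sum(forecast[start:start + hours_needed]) / hours_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

start, avg = best_charging_window(forecast_g_co2_per_kwh, hours_needed=4)
print(f"Charge from {start:02d}:00 for 4 hours (about {avg:.0f} g CO2/kWh on average)")
```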
Ariel Conn: These systems that you're talking about would be connected to the individual home? Or would they also be connected to the power grids?
Natasha Jaques: Yeah. My part of the paper focused on devices in the individual home, but we cover optimizing power grids more broadly in different sections of the paper.
Ariel Conn: So what you're talking about — again, it would be an instance where the data could still be kept private for the individual.
Natasha Jaques: Yes, exactly.
Tegan Maharaj: I think you don't even have to have any climate-related motivation to want one of these things. You could just want to save money and this would still be a good thing to want in your home.
Natasha Jaques: That's exactly right because it turns out that when the grid is the most green, the energy is also the cheapest. So it saves the consumer a lot of money to actually be more green.
Ariel Conn: Have you found that in general, does it tend to be cheaper? Or are you finding there's sort of a balance, where some things are more expensive — to implement these systems, but in other areas you're saving money?
Tegan Maharaj: I would say in the short term it's often the case that greener or climate friendly solutions are harder to implement because they're a change to the status quo, so that makes them a bit more expensive. But in the long term — even long term being like over a couple of years, and certainly in the long term over 50, a hundred years — virtually all of the climate and environmentally friendly solutions just make economic sense. They're more efficient in terms of resources; they maintain our resources for a longer time so that future generations can use them better; and they let systems sort of be more efficient. It's really a win-win if you look at it over a longer time horizon instead of being very myopic and only caring about your profits in the next quarter type of thing.
Natasha Jaques: Nice.
Ariel Conn: Yes, thank you. Continuing with the tools for individuals: one of the things that I've read about, unrelated to machine learning, is that energy companies, when they send out the electric bill, will often include how a household's energy usage compares to their neighbors' — and that often people will modify their usage as a result of what they're seeing in this comparison to their neighbors. And it seems like in one of your sections, Natasha, you're talking about taking this a step further and targeting specific households with messages that are more directly applicable to them, in order to help them modify their energy usage. Is that correct, or can you explain how that works?
Natasha Jaques: Sure. The thing you mentioned about showing people what their neighbors are consuming, like, "Oh, it turns out you use 10% more energy than your neighbor," is very effective. There was actually a startup called Opower that did that and showed how effective it was. That's actually quite interesting. And it's very cost effective to reduce energy consumption rather than try to produce new energy. So they basically did this in the cheapest possible way, even cheaper than building any new power plant, so that's kind of cool.
But with respect to using machine learning to identify certain households, which we talk about in the paper, what we're trying to describe there is that it turns out people are very, very different in their willingness to pay for and be motivated by climate programs. One study found that some consumers are willing to pay any price to reduce the emissions of their energy consumption — they're very insensitive to cost; and yet there was another group that was willing to pay zero. So they care absolutely zero about the emissions of their energy consumption.
What machine learning can help us do is use clustering and demographic information to try to find those people who are actually motivated by and care about these programs, and try to provide them with resources that allow them to participate, rather than wasting time trying to recruit everyone into a program like that.
Ariel Conn: How would a machine learning system like that work? How would it identify the groups who are more likely to care about this?
Natasha Jaques: You can use information about a person's household energy consumption, their location, size, their demographics — things like this.
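A minimal sketch of that kind of clustering, assuming a hypothetical dataset of household features; the features, numbers, and the choice of k-means are all illustrative.

```python
# Sketch: group households by a few (hypothetical) features and target
# green-energy outreach at the clusters most likely to opt in.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: monthly kWh, household size, stated willingness-to-pay ($/month)
households = np.array([
    [900, 4, 0], [850, 3, 0], [1100, 5, 2],      # high use, low willingness
    [450, 2, 15], [400, 1, 20], [500, 2, 18],    # low use, high willingness
    [700, 3, 8], [650, 2, 10],                   # in between
])

features = StandardScaler().fit_transform(households)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for cluster in range(3):
    members = households[labels == cluster]
    print(f"cluster {cluster}: mean willingness-to-pay = "
          f"{members[:, 2].mean():.1f} $/month over {len(members)} households")
```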
Ariel Conn: Okay. And then I asked this with one of the earlier questions, but we've talked about a few more machine learning systems here: what already exists? What can people already start doing? And if machine learning systems don't exist yet for some of these, what are the barriers to their creation?
Natasha Jaques: There's definitely a long history of, for example, energy disaggregation research, and the ability to identify which appliances are turned on from an energy signal. There are also research papers about identifying different households and modeling their behavior. A lot of these I don't think have made it into products yet. We're starting to see these apps that I mentioned come out, but what we really need — what's a barrier, and the real motivation for doing this paper — is we just need more people to be working on this. We're really hoping, through this paper, both to motivate people to provide data sets and to motivate machine learning researchers to bring their expertise to these problems and develop better and better algorithms.
Ariel Conn: And then also, again, we touched on the issue of privacy a little bit with some of the individual questions I was asking — but more broadly, as you're talking about applying machine learning systems to individuals and individual households, how can we be ensuring that people are able to maintain their privacy?
Natasha Jaques: A lot of the systems that I've been talking about, especially for like tracking your individual carbon footprint or optimizing the appliances in your house, are really up to the consumer to opt in. If they're excited about this and they think that this would provide them some value, then they might be willing to provide their data. But it's definitely not something that you would compel someone to give you their data. The nice thing is that machine learning techniques which allow an individual to maintain their privacy by keeping the data on the device are becoming more and more mature. Federated learning is something that a lot of people are working on, which allows you to just make predictions with the data never leaving the device.
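A bare-bones sketch of the federated averaging idea Natasha alludes to: each "home" fits a model on its own private data, and only the model weights are shared and averaged, never the raw data. The linear model and the data here are invented for illustration.

```python
# Federated averaging sketch: private data stays local; only weights travel.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])   # the shared pattern, unknown to the server

# Three "homes", each holding its own private (features, consumption) data.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

def local_update(weights, X, y, lr=0.01, epochs=20):
    """A few steps of least-squares gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

global_w = np.zeros(3)
for _ in range(30):
    # Each home improves the current global model on its own data...
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    # ...and the server only ever sees (and averages) the resulting weights.
    global_w = np.mean(local_weights, axis=0)

print("learned:", np.round(global_w, 2), "true:", true_w)
```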
Ariel Conn: And so, a last question that I had was based on me reading the paper as opposed to talking to you — and after talking to you and getting some of these better explanations, I don't think it applies as much, but I'm going to ask it anyway, just in case anyone else reads the paper and has a similar thought. And that was that some of these ideas actually seem a little bit based on psychological manipulation: using data about people to try to convince them to make different, ideally better, decisions about climate change. And to a certain extent, as someone who would like to be making better decisions for the climate, I would actually value this. But, I mean, psychological manipulation just sounds terrifying in general. So I was curious, how do you ensure that we're creating machine learning systems that, I guess, just don't have that creepy factor?
Natasha Jaques: I'm glad you asked that question because we absolutely are not proposing to do any sort of psychological manipulation whatsoever. That's not on the table. We want to make sure that we're proposing acceptable solutions to everyone that people aren't going to find problematic in any way. We really want to make sure people have autonomy to make their own decisions. And actually what we're trying to do here, really, is provide consumers with just better information so that they can make more informed decisions, and it can empower them to feel like they have the ability to reduce their behaviors, if they want to, in the most effective way.
So we're talking about solutions like using machine learning to better visualize data, because climate change is a very complex topic, and it may be hard to understand all the different sources of information. We're talking about machine learning that can predict flood risks in various areas, which people could use when they're buying a home. And then of course, all the systems we just talked about that help you understand your personal carbon footprint or help you optimize the energy use in your home; so, serving the needs of consumers, as well as helping them to be more informed.
Ariel Conn: Excellent. The final followup question that I have is, you use the example of a grocery store could print out the carbon footprint of whatever I just purchased so that I can have a better idea of the impact of the food that I'm eating. One of the things that I've found is that it's just so incredibly complicated to try to figure out what the impact, the carbon impact, of all these different decisions I'm making is. Do you think that the suggestions that you're making here can ultimately help us track all of these different super complicated systems? Is that how this can be used?
Natasha Jaques: Well, it is really complicated, and I think it can be very overwhelming. We really see these systems as a way to simplify that for an individual so they can understand what of their behaviors actually really matters, and what are just a tiny, tiny fraction of the emissions of a different behavior. So if you think about optimizing some of your groceries: now, beef consumption we know is actually a pretty significant part of your carbon impact, but some of your groceries may be a pittance compared to taking a flight.
There can be kind of an identity politics around climate change that there doesn't have to be. We shouldn't make it so that if you care about the climate, you have to be extremely strict with yourself on every front, and you can't ever use a plastic bag or you're not a true climate believer. Something like that could even be deterring individuals from feeling motivated and encouraged to change the parts of their behavior that do matter. Distinguishing the meaningful factors for reducing emissions could be really important.
Ariel Conn: I think that's really valuable. I definitely see this idea of, if you aren't perfect, you're being a hypocrite and why even bother?
Natasha Jaques: Yeah, and that's just harmful. That's not helpful at all.
Ariel Conn: No.
Tegan Maharaj: To that point, what you were saying about it being so complicated to predict one number for the carbon emissions of this banana, or something like that?
Natasha Jaques: Yes.
Tegan Maharaj: I think an important part of scientific communication is communicating uncertainty about where numbers come from. Something like a plus or minus five or whatever on the number that you give could also be very helpful. But I think with the knowledge we have about global supply systems and about the climate at this point, we can offer numbers that are much better than nothing. We are not going to be totally off base in the numbers that we're estimating. There's uncertainty, but there is also good information out there that I think should be provided to people.
Ariel Conn: Yeah. I think that's actually been another really interesting problem with the climate change debate, for lack of a better word, is this idea that it seems like a lot of people who don't understand how science works are looking for certainty. And that's just not something that can be — it doesn't exist. Even things we're very certain on still have some level of uncertainty.
Tegan Maharaj: Right. There's this difference between, "I'm not sure exactly what is going to happen," versus, "I'm not sure whether this is going to happen at all," kind of thing. We can be very certain that something bad is going to happen. Maybe we don't know if it's going to be 10 millimeters of rain, or 13, or 14, or 25, but there's a difference between not being able to forecast the exact weather in a certain place on a certain day, and not knowing if the weather on average is going to be snow versus sun for that day.
And the uncertainty — when we say uncertainty in English, that means, "I don't know, I have no idea." But that's not what it means in science. It means, "We don't know the exact number," and those things are very different.
Ariel Conn: I think that's a really important point. I'm glad you brought that up because it's definitely something that I see in the discussion around climate change, is that people don't understand how scientists use uncertainty.
Tegan Maharaj: Another thing that people mention a lot is that most of the uncertainty in climate models is not symmetric. Let's say we're estimating that global temperatures are going to rise by 1.5 degrees Celsius. The uncertainty around that number is on the upper end, not on the lower end. So, there's this skew toward, "Okay, we're not sure. It could be way, way, way, way, way worse." We're not sure, but definitely it's going to be pretty bad. And it's not like we're sure it's going to be 1.5 exactly, but we are sure it's going to be something. There's no chance that everything just stays the same.
Ariel Conn: And there's no chance that it decreases?
Tegan Maharaj: No.
Ariel Conn: Okay. Tegan, I think this is a good time to transition into the questions about your section — moving from this idea of how we can help individuals to recognizing that there's only so much individuals can do, and that society as a whole — and especially politicians and policymakers — needs to start taking action as well.
My first question for you is, in the very first part of your section you talk about tools that can be used to help understand how the public would respond to different policies. Could you explain what that would look like, and some examples of how machine learning could be used for that?
Tegan Maharaj: Sure, yeah. I think machine learning is already being used by a lot of social science, political science, and certainly economics researchers. And in the context of understanding public response to policies, I think most of the work that's done is more retrospective. It's not that governments or political parties are using much machine learning to see how the public will respond when planning an actual policy rollout. It's more like: this type of policy — for instance, pricing carbon emissions and taxing people based on them, or something like that — has been tried in these different scenarios; how can we analyze the results of those different rollouts and see how we can do better?
Those results can be analyzed quantitatively — maybe with machine learning, maybe with descriptive statistics — and then considered holistically with other factors: what the constituents of an area care about, what the concerns of local businesses are, et cetera. So machine learning is just one part of policy analysis, but it is an increasingly important part, I think, as more data becomes available.
And some of the concrete things that I think policy analysts are using more and more are things like scraping social media or the internet for a lot of text data where people are talking about a certain topic, and then using natural language processing techniques like sentiment analysis, which predicts what people maybe liked or disliked, or whether they feel generally positive or generally negative toward something; or maybe it's not a number, maybe it's categories, like whether they support or don't support a certain type of legislation. So using this kind of natural language processing approach, policy analysts or social scientists can analyze huge volumes of data, which wouldn't be possible with a person going through all of these tweets. And that kind of information can really help social scientists understand how people view something, and aggregate their preferences in a way that is more data-based than small focus groups can achieve.
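As a toy illustration of that aggregation step, here is a sketch that scores a handful of invented posts about a hypothetical policy using a tiny word list. A real system would use a trained sentiment model rather than keyword matching; this is illustration only.

```python
# Toy sentiment aggregation: score invented posts as leaning positive or
# negative toward a hypothetical policy, then summarize support.
POSITIVE = {"support", "great", "good", "glad", "love", "fair"}
NEGATIVE = {"oppose", "bad", "unfair", "angry", "hate", "expensive"}

posts = [
    "I support the new carbon tax, it seems fair to me",
    "this carbon tax is just another expensive and unfair cash grab",
    "glad to see the city finally doing something good on transit",
    "I oppose the rebate cuts, bad move",
]

def score(text):
    """Count positive minus negative keywords; crude stand-in for a real classifier."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

scores = [score(p) for p in posts]
supportive = sum(s > 0 for s in scores)
print(f"{supportive}/{len(posts)} posts lean supportive; scores = {scores}")
```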
Ariel Conn: Okay. And so to clarify, it sounds like so far these systems have been used to help understand responses to something that has already occurred. One, is that correct? And then two, moving forward, would you also then expect to see policymakers applying this information they've already gathered towards developing new policies? Or do you think it's mostly useful for better understanding and responding to public opinion?
Tegan Maharaj: I think it is already used for at least informing people's opinion about what to do next, like for a policymaker to look at aggregated data about what policies have been effective or not effective in the past. Of course, it's information for them, it's going to inform their decisions about future policies. But I don't know of any current machine learning techniques that are used to explicitly optimize a policy for deployment in the government.
The thing that's difficult there is that you have to represent the space of possible policies in something that a computer can understand. You have to be able to write it down mathematically or in a programming language. And necessarily, when you do that, you miss things. So people do this, and then they look at how — for instance, in multi-agent RL — different agents would interact, and how introducing different policies into that environment would affect the different agents in it.
People also use multi-objective optimization to try to optimize for different objectives, like reducing climate emissions while also optimizing profits and maximizing maybe the amount of fresh air, or other things that stakeholders care about. But I don't think we're at the point where we want to just blindly apply a machine learning system to this, because in writing down formally these descriptions of systems to be optimized — I'm repeating myself, but — we necessarily miss things. You have to consider them in their environment, with a lot of factors that can't easily be encoded in math or in a programming language. Really, machine learning is just a tool. It's a tool that policymakers can use, and it's only one aspect of what they do.
Ariel Conn: Regarding it being a tool that policymakers can use, to what extent is that actually the case, versus do policymakers need to be working with other researchers who specialize in machine learning? Are we developing systems that a policymaker can then take and apply to scrape social media themselves? Or do they need help?
Tegan Maharaj: I would say there are a lot of systems at this point that can be applied or deployed by somebody who doesn't have a lot of machine learning expertise. But both for people who have a decent degree of machine learning expertise and for total non-experts, I think — like any tool — machine learning can be used improperly. People who use it have a responsibility to understand at least a bit of how it works, so that they make sure it is not doing things like amplifying biases that exist in the data in a way that is dangerous for stakeholders, or targets minorities, or something like that; or giving an algorithm more sway in a decision-making process when there may be considerations that the algorithm can't make, because those things were not encoded in the environment or in the structure of what it was optimizing.
So, I think there's a responsibility for anybody deploying a machine learning algorithm to understand the tool. But on the other side, there's also a responsibility for machine learning researchers to make this kind of information available, and to make their tools easier to use, less dangerous, more interpretable, so that they can be used for good, for the useful purposes that they can be deployed for.
Ariel Conn: So, one of the things that you wrote about that I was really intrigued by is this idea of helping decision-makers and policymakers design market prices that are associated with social good. And I was hoping you could just explain how that would work?
Tegan Maharaj: Yeah. So, machine learning algorithms are really good at optimizing something, which means taking some number and finding a solution in the space of possible solutions that makes that number as high as possible — like playing a game, or something, and maximizing your score. So if we can set some kind of number — like minimizing emissions; we want emissions to be zero, or some low number — then a machine learning algorithm can optimize, for instance, a stock portfolio for achieving that number. Or in complicated decisions like deciding where to put a hydro dam, or some other social infrastructure project that needs to be done, a machine learning algorithm can help weigh all of the different factors, like: how much energy will be produced over the long term? What is the availability of the raw materials for the energy source? How much will it cost to construct? How long will it take to construct? How will local environments and ecosystems be affected? How will local people and businesses be affected? How will the economy be affected?
There are lots of factors that have to be weighed here. And when we can write those down, a machine learning algorithm can optimize those to find a good balance between all of the different objectives. So the thing that machine learning can help with here especially is when we don't know exactly what the number is, but we know that that number is affected by a bunch of different criteria like the ones that I just mentioned. So the machine learning algorithm can both come up with the number, and then also come up with the way to balance across the criteria to best achieve an objective.
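A minimal sketch of that kind of weighing, for a hypothetical choice between energy projects; the criteria scores, the stakeholder weights, and the simple weighted-sum scalarization are all illustrative, not any agency's actual method.

```python
# Weighted-sum scalarization over invented criteria scores (0-1, higher is better).
options = {
    "hydro_dam":  {"energy": 0.9, "cost": 0.4, "ecosystem": 0.3, "jobs": 0.7},
    "wind_farm":  {"energy": 0.6, "cost": 0.7, "ecosystem": 0.8, "jobs": 0.6},
    "solar_park": {"energy": 0.5, "cost": 0.8, "ecosystem": 0.9, "jobs": 0.4},
}
weights = {"energy": 0.35, "cost": 0.25, "ecosystem": 0.25, "jobs": 0.15}

def weighted_score(criteria):
    """Collapse multiple objectives into one number using the stakeholder weights."""
    return sum(weights[name] * value for name, value in criteria.items())

ranked = sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, criteria in ranked:
    print(f"{name:10s} score = {weighted_score(criteria):.3f}")
```

In practice one would also examine how the ranking shifts as the weights change, or use Pareto-based multi-objective methods rather than a single weighted sum.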
Ariel Conn: And are there examples of this being implemented already, or is it still something that's in the test stages?
Tegan Maharaj: The fields of multi-criteria decision making and multi-objective optimization have been using machine learning and optimization techniques for decades, really. Usually they're at the scale of something like an individual factory, where the operation of the factory, or maybe the relationship between multiple factories — like shipping lanes and things like that — can be easily represented in a computer. And the amounts of, for instance, raw materials and energy that are shipped between the factories, or between parts of the individual factory, can be quantified and measured. And then all of these things can be analyzed by a machine learning algorithm to optimize a factory, or something like shipping routes. So the whole field of operations research has been doing this for a long time, and they have really well-developed optimization methods for doing this kind of thing.
The thing that I would love to see be developed in the near future is using machine learning, maybe something like meta learning or transfer learning, to be able to coordinate the activities of many of these systems at a much larger scale than one factory, or something with 10 or 12 components to it. The techniques for doing that kind of decision making and optimization are much more difficult because there are many more factors to consider. And many of the methods that are developed are developed around finding exact solutions or computing things exactly, and that's something that just doesn't scale very well.
And the newer machine learning methods that we have — which learn from examples, rather than from solving exact mathematical formulas — might be able to help a lot with scaling this to huge numbers of factories in a graph, or pieces of a factory, or very large systems like the operations of a country, so that we could apply the same kind of techniques and reasoning to analyze much larger systems.
Ariel Conn: Okay. So something else that you mention in your section is hedonic pricing. Can you explain what that is, and talk about how machine learning can be applied to this?
Tegan Maharaj: Yeah. Hedonic pricing is basically the practice, or idea, of inferring the value of something from, for instance, people's purchasing decisions, or how people behave in a market, when you don't know its value explicitly. This has been used a lot to estimate how people value homes for housing prices, and for other goods whose value is hard to quantify. It can be used for non-tangible goods like carbon prices — carbon emissions — to understand how people value that.
The thing that makes it different from just market pricing is inferring the value on multiple criteria, and that is an ideal place to apply machine learning. This is not something that has been done, but speculatively people could employ hedonic techniques to estimate how much people value bad things instead of goods — so, people call these “bads”; like air pollution, or the stress of worrying if their home will be affected by a climate disaster, or things that are very intangible like this — and assign a price to those things.
Ariel Conn: So would that be similar to or different from a carbon tax?
Tegan Maharaj: It would be one methodology or one way to come up with the value for a carbon tax.
Ariel Conn: Okay. And then how would machine learning be used for that?
Tegan Maharaj: A carbon tax is maybe less applicable for hedonic pricing, because emissions can usually be quantified, so it's not like you have to infer the quantities or the value of those emissions; they're a relatively concrete quantity. But if you wanted to incorporate aspects, like I mentioned, of the stress of worrying about climate change in the future, or the value of your grandchildren having butterflies, and fresh air, and things like that — those kinds of things, you could survey people, maybe, and get them to rank how much they care about these things.
Anything that can allow you to quantify how much people care about things, you can then aggregate that information from many people and try to estimate numbers that correspond to these kinds of intangible things. And you can make this a multi-step process where you then show those numbers to people to ask if that makes sense compared to the value of some other thing.
Some of this turns me off a little bit. I think a lot of this type of thing is, we're trying to assign value to things that are essentially priceless, and it feels wrong in some way. But the thing is, if you don't do this kind of thing — if you don't assign a market value to intangible things that we care about, like nature and health — then they get overlooked by our economic system. So we really need to do this sometimes in order to make sure that these things are valued as much as we value them.
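A small sketch of the hedonic idea Tegan describes: regress observed prices on attributes and read off the implied dollar value of an intangible one. The housing data here are synthetic, generated from a made-up "true" market, so the recovered coefficient is illustrative only.

```python
# Hedonic regression sketch on synthetic housing data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
sqft = rng.uniform(50, 250, n)            # home size in square meters
air_quality = rng.uniform(0, 10, n)       # 0 = smoggy, 10 = pristine
# Hypothetical "true" market: $2,000 per m^2 plus $8,000 per air-quality point.
price = 2000 * sqft + 8000 * air_quality + rng.normal(0, 20000, n)

# Least-squares fit of price on [intercept, size, air quality].
X = np.column_stack([np.ones(n), sqft, air_quality])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"implied value of one extra air-quality point: ${coef[2]:,.0f}")
```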
Ariel Conn: And so, in your section as well you do talk a lot about multi-criteria decision making and multi-objective optimization. If you could just again sort of explain what those are and how they could be applied to climate change?
Tegan Maharaj: Yeah. Multi-criteria decision making is basically what it sounds like: making a decision when you have to weigh multiple criteria. An example that I use often is something like having to decide on a new hydro dam or source of energy. There are a lot of different criteria — the economics, the local stakeholders, the environment, the amount of energy produced, et cetera — to weigh in that decision. And one way people might solve a multi-criteria decision making problem is via multi-objective optimization, a family of computational techniques for solving multi-criteria decision making problems when you can write down the multiple criteria as different objectives that you want to achieve — which is not always the case. But when you can, machine learning techniques become very useful and applicable.
Ariel Conn: All right. You also list quite a few examples where machine learning researchers either have worked with climate experts or can work with climate experts to tackle specific problems. So maybe you could pick one or two examples of these that you found most interesting, maybe most helpful, and talk about what's been done.
Tegan Maharaj: Off the top of my head, one of my favorite papers was the paper that received the Best Paper Award at our workshop at ICML. It was about detecting anthropogenic cloud disturbances using machine learning, specifically using convolutional neural networks of the same kind that are applied to recognizing whether an image contains dogs or cats, or things like that.
The reason I really like this paper is that they framed the problem in a unique way. This wasn't a problem that was on my radar at all, but they thought — there was a climate scientist on the team; it's not just like they had this thought randomly — that clouds look different depending on whether they're produced by natural processes or by anthropogenic sources like factories or ships going across the ocean. And what they were doing was using convolutional neural networks — computer vision — to look at images of clouds and see whether those clouds were produced anthropogenically or by natural processes. And this let them track ships and pollution across the oceans much more accurately than any measurements that we have, because a big part of the problem for a lot of things related to climate is that we just don't have ways to measure what is being done. So if we can use computer vision on satellite imagery to find where the biggest sources of pollution and anthropogenic disturbance of the atmosphere are, that's a big win in my mind.
Ariel Conn: If I'm understanding this right, you're saying we can use satellite data to actually quantify how much is being emitted from these different sources?
Tegan Maharaj: For instance, yeah. Or at least, maybe not the exact — future progress in this line of work would be to identify what exact chemicals are in the cloud based on the formation of the cloud, which is maybe possible to do and is really cool. But what they were doing is just trying to tell whether it was anthropogenic or natural — which, if you look at a cloud, you're like, "Oh yeah, that is clearly coming out of the smokestacks there,” and jet trails or contrails look really straight; they look very different from clouds. And it turns out, kind of unsurprisingly, that you can train machine learning algorithms to recognize the same type of thing, and this allows us to track anthropogenic cloud disturbances across the ocean that we just couldn't do before.
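For readers curious what such a model looks like, here is a generic sketch of a small convolutional classifier that maps an image patch to "natural" versus "anthropogenic" logits. The architecture is a placeholder, not the one the paper's authors used, and the input below is random noise standing in for a satellite image patch.

```python
# Generic CNN sketch for binary cloud classification (placeholder architecture).
import torch
import torch.nn as nn

class CloudClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # logits for [natural, anthropogenic]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CloudClassifier()
fake_patch = torch.randn(1, 3, 64, 64)   # stand-in for one satellite image patch
probs = torch.softmax(model(fake_patch), dim=1)
print("P(natural), P(anthropogenic) =", probs.detach().numpy().round(3))
```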
Ariel Conn: Oh, that's really interesting. Okay. Were there other examples that you wanted to mention?
Tegan Maharaj: It's not a specific paper, but I'm really interested in a line of work being done at the intersection of game theory, mechanism design, and multi-agent RL, trying to solve cooperative problems — public goods problems — using machine learning techniques like multi-agent RL. What public goods problems are is things like the tragedy of the commons, where there's some area of common grass where all the sheep can graze, and if there are no rules about how to use it, the incentive of every farmer around there is just to use it for their own sheep until it's gone. But then there is no more grass for anybody to use, so in the long term it's bad for them. I would state public goods problems generally as short-term incentives for behavior not being aligned with long-term incentives for the good of the group.
Natasha Jaques: So this is something that I've actually been working on recently. We coded up some toy versions of those types of problems — so a tragedy of the commons problem and a public goods problem — so they're available for multi-agent reinforcement learning researchers to work on.
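For reference, the one-shot textbook version of a public goods game is only a few lines. It is far simpler than the sequential environments Natasha mentions, but it shows the same tension between individual and group incentives; the endowment and multiplier values are chosen only for illustration.

```python
# One-shot public goods game: contribute to a common pot that is multiplied
# and split equally. Free-riding pays individually but hurts the group.
ENDOWMENT = 10.0
MULTIPLIER = 1.6   # total contributions grow by this factor, then split evenly

def payoffs(contributions):
    pot = MULTIPLIER * sum(contributions)
    share = pot / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([10, 10, 10, 10]))  # everyone cooperates: each gets 16.0
print(payoffs([0, 10, 10, 10]))   # one free-rider gets 22.0, the rest get 12.0
```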
Tegan Maharaj: I think this area of research is super cool. I'm doing some current work on modeling this kind of problem in ecosystems. Ecosystem modeling uses techniques a little bit similar to multi-agent RL, but usually involving slightly different models for how agents and the environment interact with each other — and particularly there's usually not this environment that is kind of all powerful and tells the agent what to do; the environment is modeled as something more like another agent. And I'm hopeful that we can do interesting things to solve this kind of public goods problem.
Another interesting area that I hope machine learning researchers get more into is evolutionary game theory, which is basically game theory where you're considering how your actions or the actions of other people affect not just yourself, not optimizing your own utility exclusively, but considering the evolution of the group as a whole. This kind of game theory or incentive structure really impacts human decision making. So just if we want to understand human decision making better, it is useful — but also it helps us make better decisions in public goods problems.
Natasha Jaques: There's been some interesting research on this out of DeepMind. There's a nice recent paper that basically gave agents an inequity aversion motivation: an RL agent doesn't want its rewards to differ too much from the rewards of any other agent, or else it will feel "guilty" or "envious." And they show that if agents have this motivation, then they can better solve these public goods and tragedy of the commons problems. So if you feel guilty when you harvest up all of the resources in the environment too quickly, then it preserves the environment better.
Tegan Maharaj: There's a lot of biological research suggesting that social species depend on — it's often called local enforcement, or individuals either guilting each other into doing something better or rewarding each other for doing things that benefit the whole population. So I think it's really cool to see computational techniques doing this kind of thing.
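A sketch of the Fehr-Schmidt-style inequity aversion Natasha describes: an agent's shaped reward drops when it falls behind others (envy) and, more mildly, when it pulls ahead of them (guilt). The paper applies this to rewards smoothed over time inside sequential games; this one-step version, with made-up rewards and illustrative coefficients, just shows the shape of the incentive.

```python
# Fehr-Schmidt-style inequity-averse reward shaping (one-step, simplified).
def inequity_averse_reward(rewards, i, alpha=5.0, beta=0.05):
    """Shaped reward for agent i given everyone's raw rewards this step."""
    n = len(rewards)
    envy  = sum(max(r_j - rewards[i], 0.0) for j, r_j in enumerate(rewards) if j != i)
    guilt = sum(max(rewards[i] - r_j, 0.0) for j, r_j in enumerate(rewards) if j != i)
    return rewards[i] - alpha * envy / (n - 1) - beta * guilt / (n - 1)

raw = [4.0, 1.0, 1.0]   # agent 0 over-harvested this step
print([round(inequity_averse_reward(raw, i), 2) for i in range(len(raw))])
# Agent 0's advantage is slightly discounted (guilt); agents 1 and 2 are
# penalized for falling behind (envy), which pushes all agents toward parity.
```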
Ariel Conn: I want to come back to the technical requirements for some of these systems and what policy makers need to be able to do themselves, versus whether we just need to get more machine learning researchers involved in policy. And to a certain extent, honestly, it sounds like we need to get more machine learning researchers involved in policy. And I was curious what your take on that is.
Tegan Maharaj: I would say that as individuals in society, we need to be more involved in the decision making of our society. I think people in general today are very disillusioned with political systems, and I include myself there, but that doesn't take away the need for the policies that we make on a social scale to reflect the priorities and needs of everyone. And everyone has some responsibility, I think, to make that kind of knowledge known, to refine their preferences, to help aggregate them between groups of people. And it's a difficult process. We haven't figured out how to do it completely. We need to try new things. We need to make our political systems better and more representative. And machine learning researchers aren't an exception to that, especially now that we are developing tools that are impacting this kind of system and also developing tools that could make those systems better.
In a way, I think yes, machine learning researchers need to be involved more in policy. I think everybody does. And if you're a machine learning researcher interested in this kind of thing, you're in a really great position to help develop better tools that can foster collaboration between policymakers and machine learning researchers, and to make sure that the tools machine learning researchers are making are used responsibly and effectively, so that things get better for everyone.
Ariel Conn: I think that's a really important point: the idea that this isn't just about getting machine learning researchers involved, that all of us need to be more involved. And I think that nicely transitions to the question that I have for both of you now, and that is: how have your own habits changed as a result of working on this paper — either in terms of trying to be more involved in policy, or trying to get more involved in developing systems that people can use, or even non machine learning solutions? What, if anything, are you doing differently?
Natasha Jaques: Because I was researching this section on your individual carbon behaviors, I looked into exactly how much my different behaviors cost in terms of emissions, and I found that my flying and beef eating are definitely just dominating that. So what I've started doing is actually purchasing carbon offsets to offset my flights and any beef that I eat. And actually you can offset the rest of your emissions for almost nothing once you do that.
I was also looking into the viability of carbon offsets, and it turns out that they are still really impactful, because there's still a lot of low-hanging fruit around the world for lowering emissions — like, for example, just funding people to buy cleaner cookstoves, or funding the development of cleaner power plants. So I've been purchasing carbon offsets, and I actually started instituting a program in our Media Lab department here at MIT to offset flights for work-related air travel. So the Media Lab has agreed to start a pilot program to offset students' and researchers' flights.
Ariel Conn: That's excellent. That's been something that — it's certainly the biggest part of my carbon footprint, the flying that I do. And I try to offset what I'm doing, but if someone is flying for their organization, it seems nice for the organization to be contributing to the solution, I guess.
Natasha Jaques: I think it is really good for an institution to do that, because for a lot of us, traveling is part of our job. For those of us who are doing research, meeting at conferences is a really important part of your career. It's very hard to just give that up. So I think offsets are a really promising way to focus on this, because, I mean, they're not perfect and there is some controversy around offsets, but looking into them further you can see that there's just a lot of climate infrastructure that could be funded still, and actually have long-term impact. So I do think carbon offsets are something to look into. And if you're interested, you can go to offset.media.mit.edu to check out that program. We're hoping it's going to spur other departments to adopt something similar.
Tegan Maharaj: Similarly to Natasha's lab — this wasn't my initiative; it was more that I participated with Alex and Yoshua, the head of our lab — Mila is looking into something like this. And ICLR, the International Conference on Learning Representations — the conference is going to offset the carbon emissions of travel for everyone going to the conference. And I think it's also a great initiative for conferences. So that's not about me.
I've been working somewhat in this area for a while, so I think a lot of the changes I've made have been over the past, I don't know, decade or something. I'm vegetarian, I bike everywhere, I try not to fly. But the things that have changed most are probably my research focus and my ability to supervise and collaborate with people who are wanting to work on projects related to climate change. This project gave a lot of visibility to applying machine learning to climate change, and I get a wonderful number of people contacting me to ask, like, "Hey, I'm interested in doing this. How can I help? What can I do?" And I get to talk to people every day about cool projects that they could do that have a climate impact.
One of the things that I've gotten most, I guess, from the paper is concrete resources for assessing what are going to be the most productive or efficient uses of my time, or somebody else's time, in applying ML to climate change. Because there's no way that I as a machine learning researcher can become an expert in all of these areas, so it was fantastic that we have all of these excellent researchers from many different fields contributing to the paper. Having that information, and also these contacts in other fields that I can refer people to, has been hugely helpful.
Ariel Conn: So have you both generally found that you are getting a good response to this paper?
Tegan Maharaj: Yeah, it's really funny. A lot of the response that I've gotten is like, "I saw the title and I thought you machine learning researchers were being obnoxious again thinking you could save the day. But then I read the paper and I realized you didn't mean that. I think the paper is actually really good." So it's like a one-two punch kind of feedback.
But I think it's really good that people are actually reading it, because I do think that our message is really not that machine learning is a magical wand that we are trying to wave and save the world. It's really not that. Machine learning is not a magical solution; it is not going to solve all of the problems. But every little bit helps, and everything that we can do as machine learning researchers can be applicable to problems of climate change. You can have a great career in machine learning and work on really interesting problems that push the edges of our knowledge and understanding — and help the climate. That's the kind of message that we want to get out there.
Natasha Jaques: I think people have been generally really enthusiastic about the paper and I find that really encouraging, but what's even more exciting for me than just the paper is the workshops that Tegan and David and a lot of people have been continuing to organize, and the participation I'm seeing in those workshops. So there's actually concrete evidence of researchers that have machine learning knowledge taking that and applying it to these problems. That's been really encouraging.
Ariel Conn: And overall, do you both feel hopeful that we'll be able to address climate change in a timely enough fashion?
Tegan Maharaj: My basic answer to this is yes, and also that I'm not really sure there is such a thing as a "timely enough fashion." There isn't some magical day in the future past which the world will explode, where if we just finish in time, like the clock counting down at the end of a movie, we'll all be fine. It's a continuum. Every day that we don't take action to make things better, it gets worse. There's less and less chance that the environment will, in the long term, be stable enough to support biodiversity, to sustain populations on coastlines, to give us as much fresh air and stable weather as we have now. And every day that we take action and do something about the problem, there's more chance that all of that good stuff is going to happen.
I think this all-or-nothing thinking can make us really discouraged and less likely to take concrete action, and I think everybody should feel encouraged and empowered. Every little bit makes a difference. Every step that we take brings us closer to a better place. And I am hopeful that all of these steps are going to add up so that the world keeps being a cool, awesome place full of air that we can breathe and interesting, awesome, weird animals and plants and stuff.
Natasha Jaques: Yeah, I agree with what Tegan said exactly. So, there isn't an “in time.” We've already done a lot of damage to the environment, to coral reefs. We've killed 3 billion birds in North America in the last 50 years. Animal populations have declined by 60% on average. So that's really scary. But I also see continued accelerating technological progress that could be used to help these problems, and I do think that we're pretty smart and we're pretty innovative so we could address a lot of these issues. But the problem is that if there aren't incentives to do so, then I'm not sure that we'll address them effectively.
One of my favorite examples is the issue of Freon gas and the ozone layer, right? It was fairly economically viable to replace Freon gas and still get cooling systems, and because there were regulations put in place, we made that transition fairly quickly and the hole in the ozone layer is now starting to shrink. But if we don't have the incentives in place, then I'm not sure how we'll engineer massive social change. So it is a political problem as well. We have to get people motivated and we have to get large institutions to change.
Tegan Maharaj: On that topic, one of the most surprising things that I took out of the things that I learned at our last workshop was a result from Drawdown, which shows that some of the biggest factors increasing greenhouse gases in the environment are old refrigerators and air conditioning units. So one of the most impactful things you can do as a homeowner, as an individual, is to make sure that your old refrigerator or AC unit is recycled and disposed of responsibly, so that it doesn't just end up in a landfill leaking those gases that destroy the ozone layer and increase global warming.
Ariel Conn: As you guys were working on this, were there other things that surprised you?
Natasha Jaques: So one thing that surprised me in reading — I don't know if you've seen this book from MIT Press called What We Know About Climate Change. It's really good. And that book makes a really strong case for nuclear energy; basically saying that we are overly scared of nuclear energy and with the latest technology, it's actually incredibly clean. It's a solution we sort of already have that we could be putting in place. And so, I think it surprised me how much certain climate experts are really in favor of nuclear energy, and how unwilling some politicians are to talk about it.
Tegan Maharaj: It was surprising to me to learn how much policy analysts and people in the social sciences, economics, and policy are already using machine learning and empirical methods, maybe under different names and maybe without making the connection to the machine learning community. So I think, and we sound like a broken record saying this, there's really a lot of collaboration that can happen, a lot of really productive work that could come out of that kind of collaboration.
Ariel Conn: Excellent. And then are there any final thoughts that you want to leave with listeners? Anything that you think we should have gotten into that we didn't, or that you think is really interesting for people to know or understand?
Tegan Maharaj: The main high-level ways I see ML helping with problems of climate change, or being applied in general, fall into two families. There's descriptive work, where machine learning can help analyze and visualize large volumes of data so that people can make better decisions; and then there's prescriptive work, where the machine learning algorithm itself helps decide what to do, makes decisions, or predicts what is going to happen as a result of the complicated processes playing out in the climate.
Natasha Jaques: I think the only thing I would like to reemphasize is that individuals who don't have that much information on climate change and sustainability shouldn't be intimidated; just try to do what you can, and there will be more tools in the future.
Tegan Maharaj: I think it's really important for machine learning researchers, and people in general, to know that our society is kind of a work in progress. The way we make decisions and the things that we're doing in the world are things that we can have a huge impact on, that we can change; we can make them better. And you can apply your skills to doing that, and it's not that hard. Maybe you need to learn about some stuff and collaborate with some people and think of a new problem setting where you can apply machine learning. But we're researchers, and that's the kind of cool stuff that I like doing, anyway.
So I think the potential for new problems, new interesting applications of machine learning, is all really exciting, and it's really great that we can make a difference in something that is very important to me: a better world for everybody. It's cliché, but that's what I want, and I think we can do it.
Ariel Conn: Excellent. And I think the point that you've both made about how every little bit helps is especially important as well.
Well, thank you both so much.
Natasha Jaques: Yeah, thank you.
Tegan Maharaj: Thanks for having us on.
Ariel Conn: I hope you’ve enjoyed these episodes about how we can use machine learning to tackle climate change. On the next episode, we’ll be joined by Glen Peters who is a Research Director at the CICERO Center for International Climate Research in Oslo. Most of his research is on past, current, and future trends in energy consumption and greenhouse gas emissions.
Glen Peters: Emissions have grown rapidly, and continue to grow, and a bit of a consequence of that is that the relative emissions from fossil fuels keep growing in the carbon budget. We used to think that emissions from fossil fuels were the most certain part of the carbon budget, but now that emissions are so big, the actual uncertainty is quite important.
Ariel Conn: You can catch Glen next week as he joins us for episode 18 of Not Cool, a Climate Podcast. And as always, if you’ve been enjoying these episodes, please take a moment to like them, share them and maybe even leave a good review.