AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2)
The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's eye view of the landscape. This podcast is the second of two parts attempting to partially remedy this by providing an overview of technical AI alignment efforts. In particular, this episode seeks to continue the discussion from Part 1 by going into more depth on the specific approaches to AI alignment. In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.
Topics discussed in this episode include:
- Embedded agency
- The field of "getting AI systems to do what we want"
- Ambitious value learning
- Corrigibility, including iterated amplification, debate, and factored cognition
- AI boxing and impact measures
- Robustness through verification, adversarial ML, and adversarial examples
- Interpretability research
- Comprehensive AI Services
- Rohin's relative optimism about the state of AI alignment
You can take a short (3 minute) survey to share your feedback about the podcast here.
We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.
Recommended/mentioned reading
Reframing Superintelligence: CAIS as General Intelligence
Penalizing side effects using stepwise relative reachability
Techniques for optimizing worst-case performance
Cooperative Inverse Reinforcement Learning
Deep reinforcement learning from human preferences
Supervising strong learners by amplifying weak experts
The Building Blocks of Interpretability
Good and safe uses of AI Oracles
Transcript
Lucas: Hey everyone, welcome back to the AI Alignment Podcast. I'm Lucas Perry, and today's episode is the second part of our two part series with Rohin Shah, developing an overview of technical AI alignment efforts. If you haven't listened to the first part, we highly recommend that you do, as it provides an introduction to the varying approaches discussed here. The second part is focused on exploring AI alignment methodologies in more depth, and nailing down the specifics of the approaches and lenses through which to view the problem.
In this episode, Rohin will begin by moving sequentially through the approaches discussed in the first episode. We'll start with embedded agency, then discuss the field of getting AI systems to do what we want, and we'll discuss ambitious value learning alongside this. Next, we'll move to corrigibility, in particular, iterated amplification, debate, and factored cognition.
Next we'll discuss placing limits on AI systems, things of this nature would be AI boxing and impact measures. After this we'll get into robustness which consists of verification, adversarial machine learning, and adversarial examples to name a few.
Next we'll discuss interpretability research, and finally comprehensive AI services. By listening to the first part of the series, you should have enough context for these materials in the second part. As a bit of an announcement, I'd love for this podcast to be particularly useful and interesting for its listeners. So I've gone ahead and drafted a short three minute survey that you can find linked on the FLI page for this podcast, or in the description of where you might find this podcast. As always, if you find this podcast interesting or useful, please make sure to like, subscribe and follow us on your preferred listening platform.
For those of you that aren't already familiar with Rohin, he is a fifth year PhD student in computer science at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. With that, we're going to start off by moving sequentially through the approaches just enumerated. All right. Then let's go ahead and begin with the first one, which I believe was embedded agency.
Rohin: Yeah, so embedded agency. I kind of want to just defer to the embedded agency sequence, because I'm not going to do anywhere near as good a job as that does. But the basic idea is that we would like to have this sort of theory of intelligence, and one major blocker to this is the fact that all of our current theories, most notably reinforcement learning, make the assumption that there is a nice clean boundary between the environment and the agent. It's sort of like the agent is playing a video game, and the video game is the environment. There's no way for the environment to actually affect the agent. The agent has this defined input channel, takes actions, those actions get sent to the video game environment, the video game environment does stuff based on that and creates an observation, and that observation is then sent back to the agent who gets to look at it, and there's this very nice, clean abstraction there. The agent can be bigger than the video game, in the same way that I'm bigger than tic tac toe.
I can actually simulate the entire game tree of tic tac toe and figure out what the optimal policy for tic tac toe is. There's actually this cool XKCD comic that just shows you the entire game tree, it's great.
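As an aside for readers, this "bigger than tic tac toe" point can be made concrete: the full game tree is small enough to search exhaustively with plain minimax. Here's a minimal Python sketch; the board encoding is my own choice for illustration, not anything from the episode.

```python
# Exhaustively solving tic tac toe: an agent that is "bigger" than the
# game can simulate the entire game tree. Board is a tuple of 9 cells,
# each 'X', 'O', or ' '. X moves first and is the maximizer.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Game value under optimal play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    values = [minimax(board[:i] + (player,) + board[i + 1:],
                      'O' if player == 'X' else 'X')
              for i in moves]
    return max(values) if player == 'X' else min(values)

empty = (' ',) * 9
print(minimax(empty, 'X'))  # optimal play from the empty board is a draw: 0
```

The memoized search visits only a few thousand distinct positions, which is exactly the sense in which a human (or any sufficiently large agent) can contain a perfect model of this environment.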
So in the same way in the video game setting, the agent can be bigger than the video game environment, in that it can have a perfectly accurate model of the environment and know exactly what its actions are going to do. So there are all of these nice assumptions that we get in video game environment land, but in real world land, these don't work. If you consider me on the Earth, I cannot have an exact model of the entire environment because the environment contains me inside of it, and there is no way that I can have a perfect model of me inside of me. That's just not a thing that can happen. Not to mention having a perfect model of the rest of the universe, but we'll leave that aside even.
There's the fact that it's not super clear what exactly my action space is. Once there is a laptop available to me, does the laptop start counting as part of my action space? Do we only talk about motor commands I can give to my limbs? But then what happens if I suddenly get uploaded and now I just don't have any limbs anymore? What happened to my actions, are they gone? So Embedded Agency broadly factors this question out into four sub problems. I associate them with colors, because that's what Scott and Abram do in their sequence. The red one is decision theory. Normally decision theory is: consider all possible actions, simulate their consequences, and choose the one that will lead to the highest expected utility. This is not a thing you can do when you're an embedded agent, because the environment can depend on your policy.
The classic example of this is Newcomb's problem, where part of the environment is an all-powerful being, Omega. Omega is able to predict you perfectly, so it knows exactly what you're going to do, and Omega is 100% trustworthy, and all those nice simplifying assumptions. Omega provides you with the following game: he's going to put two transparent boxes in front of you. The first box will always contain $1,000, and the second box will either contain a million dollars or nothing, and you can see this because they're transparent. You're given the option to either take one of the boxes or both of the boxes, and you just get whatever's inside of them.
The catch is that Omega only puts the million dollars in the box if he predicts that you would take only the box with the million dollars in it, and not the other box. So now you see the two boxes, and you see that one box has a million dollars, and the other box has a thousand dollars. In that case, should you take both boxes? Or should you just take the box with the million dollars? So the way I've set it up right now, it's logically impossible for you to do anything besides take the million dollars, so maybe you'd say okay, I'm logically required to do this, so maybe that's not very interesting. But you can relax this to a problem where Omega is 99.999% likely to get the prediction right. Now in some sense you do have agency. You could choose both boxes and it would not be a logical impossibility, and you know, both boxes are there. You can't change the amounts that are in the boxes now. Man, you should just take both boxes because it's going to give you $1,000 more. Why would you not do that?
But I claim that the correct thing to do in this situation is to take only one box because the fact that you are the kind of agent who would only take one box is the reason that the one box has a million dollars in it anyway, and if you were the kind of agent that did not take one box, took two boxes instead, you just wouldn't have seen the million dollars there. So that's the sort of problem that comes up in embedded decision theory.
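To make the arithmetic behind this one-boxing argument concrete, here is a quick expected-value calculation for the relaxed version where Omega's prediction is correct with probability p. This is just the evidential arithmetic, not a full decision theory, and the exact accuracy figure is taken from Rohin's example.

```python
# Evidential expected value for the relaxed Newcomb's problem, where
# Omega predicts correctly with probability p. Conditioning on your
# choice tells you what Omega most likely predicted.
def ev_one_box(p):
    # If you one-box, Omega predicted one-boxing with probability p,
    # in which case the second box holds $1,000,000.
    return p * 1_000_000

def ev_two_box(p):
    # If you two-box, the million is present only when Omega erred
    # (probability 1 - p), and you always collect the $1,000.
    return (1 - p) * 1_000_000 + 1_000

p = 0.99999
print(ev_one_box(p))  # roughly $999,990
print(ev_two_box(p))  # roughly $1,010
```

Under this way of computing expectations, one-boxing wins by almost three orders of magnitude, which is the quantitative version of "the kind of agent who one-boxes is the reason the million is there."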
Lucas: Even though it's a thought experiment, there's a sense though in which the agent in the thought experiment is embedded in a world where he's making the observation of boxes that have a million dollars in them, with a genie posing these situations?
Rohin: Yeah.
Lucas: I'm just seeking clarification on the embeddedness of the agent and Newcomb's problem.
Rohin: The embeddedness is because the environment is able to predict exactly, or with close to perfect accuracy what the agent could do.
Lucas: The genie being the environment?
Rohin: Yeah, Omega is part of the environment. You've got you, the agent, and everything else, the environment, and you have to make good decisions. We've already been talking about how the boundary between agent and environment isn't actually all that clear. But to the extent that it's sensible to talk about you being able to choose between actions, we want some sort of theory for how to do that when the environment can contain copies of you. So you could think of Omega as simulating a copy of you and seeing what you would do in this situation before actually presenting you with a choice.
So we've got the red decision theory, then we have yellow embedded world models. With embedded world models, the problem is that, normally in our nice video game environment, we can have an exact model of how the environment is going to respond to our actions. Even if we don't know it initially, we can learn it over time, and then once we have it, it's pretty easy to see how you could plan in order to do the optimal thing. You can sort of try all your actions, simulate them all, and then see which one does the best and do that one. This is roughly how AIXI works. AIXI is the model of the optimally intelligent RL agent in these sorts of video game environment settings.
Once you're in embedded agency land, you cannot have an exact model of the environment, because for one thing the environment contains you and you can't have an exact model of you, but also the environment is large, and you can't simulate it exactly. The big issue is that it contains you. So how you get any sort of sensible guarantees on what you can do, even though the environment can contain you, is the problem of embedded world models. You still need a world model. It can't be exact because it contains you. Maybe you could do something hierarchical where things are fuzzy at the top, but then you can focus in on each particular level of the hierarchy in order to get more and more precise about each particular thing. Maybe this is sufficient? Not clear.
Lucas: So in terms of human beings though, we're embedded agents that are capable of creating robust world models that are able to think about AI alignment.
Rohin: Yup, but we don't know how we do it.
Lucas: Okay. Are there any sorts of understandings that we can draw from our experience?
Rohin: Oh yeah, I'm sure there are. There's a ton of work on this that I'm not that familiar with, probably in cognitive science or psychology or neuroscience; all of these fields I'm sure will have something to say about it. Hierarchical world models in particular are pretty commonly talked about as interesting. I know that there's a whole field of hierarchical reinforcement learning in AI that's motivated by this, but I believe it's also talked about in other areas of academia, and I'm sure there are other insights to be gotten from there as well.
Lucas: All right, let's move on then from hierarchical world models.
Rohin: Okay. Next is blue robust delegation. So with robust delegation, the basic issue here, so we talked about Vingean reflection a little bit in the first podcast. This is a problem that falls under robust delegation. The headline difficulty under robust delegation is that the agent is able to do self improvement, it can reason about itself and do things based on that. So one way you can think of this is that instead of thinking about it as self modification, you can think about it as the agent is constructing a new agent to act at future time steps. So then in that case your agent has the problem of how do I construct an agent for future time steps such that I am happy delegating my decision making to that future agent? That's why it's called robust delegation. Vingean reflection in particular is about how can you take an AI system that uses a particular logical theory in order to make inferences and have it move to a stronger logical theory, and actually trust the stronger logical theory to only make correct inferences?
Stated this way, the problem is impossible: it's a well known result in logic that a weaker theory cannot prove the consistency of, well, even itself, and as a corollary, of any stronger theory either. Intuitively, in this pretty simple example, we don't know how to get an agent that can trust a smarter version of itself. You should expect this problem to be hard, right? It's in some sense dual to the problem that we have of AI alignment, where we're creating something smarter than us, and we need it to pursue the things we want it to pursue, but it's a lot smarter than us, so it's hard to tell what it's going to do.
So I think of this as a version of the AI alignment problem, but applied to the case of some embedded agent reasoning about itself, and making a better version of itself in the future. So I guess we can move on to the green section, which is subsystem alignment. The tagline for subsystem alignment would be that the embedded agent is going to be made out of parts. It's not this sort of unified coherent object. It's got different pieces inside of it because it's embedded in the environment, and the environment is made of pieces that make up the agent, and it seems likely that your AI system is going to be made up of different cognitive sub parts, and it's not clear that those sub parts will integrate together into a unified whole such that the unified whole is pursuing a goal that you like.
It could be that each individual sub part has its own goal and they're all competing with each other in order to further their own goals, and that the aggregate overall behavior is usually good for humans, at least in our current environment. But as the environment changes, which it will due to technological progression, one of the parts might just win out and be optimizing some goal that is not anywhere close to what we wanted. A more concrete example would be: one way that you could imagine building a powerful AI system is to have a world model that is rewarded for making accurate predictions about what the world will look like, and then you have a decision making model, which has a normal reward function that we program in, and tries to choose actions in order to maximize that reward. So now we have an agent that has two sub systems in it.
You might worry for example that once the world model gets sufficiently powerful, it starts realizing that the decision making component depends on its output in order to make decisions, and that it can trick it into making the world easier to predict. So maybe it gives the decision maker models of the world that say: make everything look red, or make everything black, and then you will get high reward somehow. Then if the agent actually takes that action and makes everything black, and now everything looks black forevermore, then the world model can very easily predict: yeah, no matter what action you take, the world is just going to look black. That's what the world is now, and that gets the highest possible reward. That's a somewhat weird story for what could happen. But there's no real strong argument that says nope, this will definitely not happen.
Lucas: So in total sort of, what is the work that has been done here on inner optimizers?
Rohin: Clarifying that they could exist. I'm not sure if there has been much work on it.
Lucas: Okay. So this is our fourth cornerstone here in this embedded agency framework, correct?
Rohin: Yup, and that is the last one.
Lucas: So surmising these all together, where does that leave us?
Rohin: So I think my main takeaway is that I am much more strongly agreeing with MIRI that yup, we are confused about how intelligence works. That's probably it, that we are confused about how intelligence works.
Lucas: What is this picture that I guess is conventionally held of what intelligence is that is wrong? Or confused?
Rohin: I don't think there's a thing that's wrong about the conventional picture. So you could talk about a definition of intelligence of being able to achieve arbitrary goals. I think Eliezer says something like cross domain optimization power, and I think that seems broadly fine. It's more that we don't know how intelligence is actually implemented, and I don't think we ever claimed to know that, but embedded agency is like: we really don't know it. You might've thought that we were making progress on figuring out how intelligence might be implemented with classical decision theory, or the Von Neumann-Morgenstern utility theorem, or results like the value of perfect information always being non-negative.
You might've thought that we were making progress on it, even if we didn't fully understand it yet, and then you read embedded agency and you're like: no, actually there are lots more conceptual problems that we have not even begun to touch yet. Well, MIRI has begun to touch them, I would say, but we really don't have good stories for how any of these things work. Classically we just don't have a description of how intelligence works. MIRI's like: even the small threads of things we thought about how intelligence could work are definitely not the full picture, and there are problems with them.
Lucas: Yeah, I mean just on simple reflection, it seems to me that in terms of the more confused conception of intelligence, it sort of models it more naively as we were discussing before, like the simple agent playing a computer game with these well defined channels going into the computer game environment.
Rohin: Yeah, you could think of AIXI for example as a model of how intelligence could work theoretically. The sequence is like: no, here is why that's not a sufficient theoretical model.
Lucas: Yeah, I definitely think that it provides an important conceptual shift. So we have these four corner stones, and it's illuminating in this way, are there any more conclusions or wrap up you'd like to do on embedded agency before we move on?
Rohin: Maybe I just want to add a disclaimer that MIRI is notoriously hard to understand and I don't think this is different for me. It's quite plausible that there is a lot of work that MIRI has done, and a lot of progress that MIRI has made, that I either don't know about or know about but don't properly understand. So I know I've been saying I want to defer to people a lot, or I want to be uncertain a lot, but on MIRI I especially want to do so.
Lucas: All right, so let's move on to the next one within this list.
Rohin: The next one was doing what humans want. How do I summarize that? I read a whole sequence of posts on it. I guess the story for success, to the extent that we have one right now is something like use all of the techniques that we're developing, or at least the insights from them, if not the particular algorithms to create an AI system that behaves corrigibly. In the sense that it is trying to help us achieve our goals. You might be hopeful about this because we're creating a bunch of algorithms for it to properly infer our goals and then pursue them, so this seems like a thing that could be done. Now, I don't think we have a good story for how that happens. I think there are several open problems that show that our current algorithms are insufficient to do this. But it seems plausible that with more research we could get to something like that.
There's not really a good overall summary of the field because it's more like a bunch of people separately having a bunch of interesting ideas and insights, and I mentioned a bunch of them in the first part of the podcast already. Mostly because I'm excited about these and I've read about them recently, so I just sort of start talking about them whenever they seem even remotely relevant. But to reiterate them, there is the notion of analyzing the human-AI system together as pursuing some sort of goal, or being collectively rational, as opposed to having an individual AI system that is individually rational. That's been somewhat formalized in Cooperative Inverse Reinforcement Learning. Typically with inverse reinforcement learning, so not the cooperative kind, you have a human, the human is sort of exogenous, the AI doesn't know that they exist, and the human creates a demonstration of the sort of behavior that they want the AI to do. If you're thinking about robotics, it's picking up a coffee cup, or something like this. Then the robot just sort of sees this demonstration, which comes out of thin air; it's just data that it gets.
The robot then asks: let's say that I had executed this demonstration, what reward function would I have been optimizing? It figures out a reward function, and then it uses that reward function however it wants. Usually you would then use reinforcement learning to optimize that reward function and recreate the behavior. So that's normal inverse reinforcement learning. Notable here is that you're not considering the human and the robot together as a full collective system. The human is sort of exogenous to the problem, and also notable is that the robot is sort of taking the reward to be something that it has, as opposed to something that the human has.
So CIRL basically says: no, no, no, let's not model it this way. The correct thing to do is to have a two player game that's cooperative between the human and the robot, and now the human knows the reward function and is going to take actions somehow. They don't necessarily have to be demonstrations. But the human knows the reward function and will be taking actions. The robot on the other hand does not know the reward function, and it also gets to take actions, and the robot keeps a probability distribution over the reward that the human has, and updates this over time based on what the human does.
Once you have this, you get this sort of nice, interactive behavior where the human is taking actions that teach the robot about the reward function. The robot learns the reward function over time and then starts helping the human achieve his or her goals. This sort of teaching and learning behavior comes simply from the assumption that the human and the robot are both playing the game optimally, such that the reward function gets optimized as well as possible. So you get this teaching and learning behavior from the normal notion of optimizing a particular objective, just from having the objective be a thing that the human knows, but not a thing that the robot knows. One thing that, I don't know if CIRL introduced it, but one of the key aspects of CIRL, was having a probability distribution over the reward function, so you're uncertain about what reward you're optimizing.
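A minimal sketch of the belief update at the heart of this kind of value learning: the robot keeps a posterior over candidate reward functions and updates it from the human's approximately rational choices. The Boltzmann-rational human model and the toy two-action setup below are illustrative assumptions on my part, not the CIRL paper's exact formulation.

```python
# Bayesian reward inference from human actions: the robot maintains a
# posterior over candidate reward functions and updates it as the human
# acts. Human is modeled as Boltzmann-rational (noisily optimal).
import math

actions = ["make_coffee", "make_tea"]
# Hypothetical candidate reward functions: reward for each action.
candidate_rewards = {
    "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.0},
    "likes_tea":    {"make_coffee": 0.0, "make_tea": 1.0},
}
beta = 2.0  # rationality parameter: how close to optimal the human is

def likelihood(action, reward):
    """P(human takes `action` | true reward) under a Boltzmann model."""
    z = sum(math.exp(beta * reward[a]) for a in actions)
    return math.exp(beta * reward[action]) / z

def update(posterior, observed_action):
    """Bayes rule: reweight each hypothesis by how well it explains the action."""
    new = {h: p * likelihood(observed_action, candidate_rewards[h])
           for h, p in posterior.items()}
    total = sum(new.values())
    return {h: p / total for h, p in new.items()}

posterior = {"likes_coffee": 0.5, "likes_tea": 0.5}  # uniform prior
# The human makes tea twice; the robot grows confident they like tea.
for _ in range(2):
    posterior = update(posterior, "make_tea")
print(posterior)  # posterior mass shifts heavily toward "likes_tea"
```

The interactive teaching behavior in full CIRL comes from solving the two-player game, but this update rule is the core mechanism by which the human's actions inform the robot's reward uncertainty.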
This seems to give a bunch of nice properties. In particular, once the human starts taking actions like trying to shut down the robot, then the robot's going to think: okay, if I knew the correct reward function, I would be helping the human, and given that the human is trying to turn me off, I must be wrong about the reward function, I'm not helping, so I should actually just let the human turn me off, because that's what would achieve the most reward for the human. So you no longer have this incentive to disable your shutdown button in order to keep optimizing. Now this isn't exactly right, because better than both of those options is to disable the shutdown button, stop doing whatever it is you were doing because it was clearly bad, and then just observe humans for a while until you can narrow down what their reward function actually is, and then you go and optimize that reward, and behave like a traditional goal directed agent. This sounds bad, but it doesn't actually seem that bad to me under the assumption that the true reward function is a possibility that the robot is considering and has a reasonable amount of support in the prior.
Because in that case, once the AI system eventually narrows down on the reward function, it will be either the true reward function, or a reward function that's basically indistinguishable from it, because otherwise, there would be some other information that I could gather in order to distinguish between them. So you actually would get good outcomes. Now of course in practice it seems likely that we would not be able to specify the space of reward functions well enough for this to work. I'm not sure about that point. Regardless, it seems like there's been some sort of conceptual advance here about when the AI's trying to do something for the human, it doesn't have the disabling the shutdown button, the survival incentive.
So while maybe reward uncertainty is not exactly the right way to do it, it seems like you could do something analogous that doesn't have the problems that reward uncertainty does.
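To illustrate the shutdown reasoning numerically, here is a toy version of the argument. The payoffs, the prior, and the perfectly-informed-human assumption are all made up for illustration; they are not taken from CIRL or any other paper.

```python
# Toy arithmetic behind "reward uncertainty makes the robot accept
# shutdown": the robot is unsure whether its planned action is good
# (+10) or bad (-50) for the human, and assumes the human only reaches
# for the off switch when the plan is bad.
def expected_value_of_acting(p_good):
    """Robot's expected reward from executing its plan."""
    return p_good * 10 + (1 - p_good) * (-50)

# Prior: the robot thinks its plan is probably fine, so acting looks good.
prior_p_good = 0.9
print(expected_value_of_acting(prior_p_good))  # positive: worth acting

# The human reaches for the off switch. If the human only does this when
# the plan is bad, the robot should condition on that observation:
posterior_p_good = 0.0  # under the perfectly-informed-human assumption
print(expected_value_of_acting(posterior_p_good))  # negative: let them press it
```

The shutdown attempt is evidence about the reward function, and conditioning on it flips the sign of the robot's expected value, so deference falls out of ordinary expected-reward maximization rather than a hard-coded rule.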
One other thing that's kind of in this vein, but a little bit different, is the idea of an AI system that infers and follows human norms, and the reason we might be optimistic about this is because humans seem to be able to infer and follow norms pretty well. I don't think humans can infer the values that some other human is trying to pursue and then optimize them to lead to good outcomes. We can do that to some extent. Like I can infer that someone is trying to move a cabinet, and then I can go help them move that cabinet. But in terms of their long term values or something, it seems pretty hard to infer and help with those. But norms we do in fact infer and follow all the time. So we might think that's an easier problem, and our AI systems could do it as well.
Then the story for success is basically that with these AI systems, we are able to accelerate technological progress as before, but the AI systems behave in a relatively human like manner. They don't do really crazy things that a human wouldn't do, because that would be against our norms. With the accelerating technological progress, we get to the point where we can colonize space, or whatever else it is you want to do with the future. Perhaps even along the way we do enough AI alignment research to build an actually aligned superintelligence.
There are problems with this idea. Most notably if you accelerate technological progress, bad things can happen from that, and norm following AI systems would not necessarily stop that from happening. Also to the extent that if you think human society, if left to its own devices would lead to something bad happening in the future, or something catastrophic, then a norm following AI system would probably just make that worse, in that it would accelerate that disaster scenario, without really making it any better.
Lucas: AI systems in a vacuum that are simply norm following seem to have some issues, but it seems like an important tool in the toolkit of AI alignment to have AIs which are capable of modeling and following norms.
Rohin: Yup. That seems right. Definitely agree with that. I don't think I had mentioned the reference on this. So for this one I would recommend people look at Incomplete Contracting and AI Alignment I believe is the name of the paper by Dylan Hadfield-Menell, and Gillian Hadfield, or also my post about it in the Value Learning Sequence.
So far I've been talking about sort of high level conceptual things within 'getting AI systems to do what we want.' There are also a bunch of more concrete technical approaches. There's inverse reinforcement learning, and deep reinforcement learning from human preferences, where you basically get a bunch of comparisons of behavior from humans, and use that to infer a reward function that your agent can optimize. There's recursive reward modeling, where you take the task that you are trying to do, and then you consider a new auxiliary task of evaluating your original task. So maybe if you wanted to train an AI system to write fantasy books, well, if you were to give human feedback on that, it would be quite expensive because you'd have to read the entire fantasy book and then give feedback. But you could instead outsource the task of evaluating fantasy books: you could recursively apply this technique and train a bunch of agents that can summarize the plot of a book or comment on the prose of the book, or give a one page summary of the character development.
Then you can use all of these AI systems to help you give feedback on the original AI system that's trying to write a fantasy book. So that's recursive reward modeling. I guess going a bit back into the conceptual territory, I wrote a paper recently on learning preferences from the state of the world. So the intuition there is that the AI systems we create aren't being created into a brand new world. They're being instantiated in a world where we have already been acting for a long time. So the world is already optimized for our preferences, and as a result, our AI systems can just look at the world and infer quite a lot about our preferences. We gave an algorithm that does this in some simple environments.
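The intuition of inferring preferences from the state of the world can be sketched as a single Bayes update: an intact environment is evidence about what the humans who shaped it care about. The vase example and the likelihood numbers below are made-up illustrations; the actual paper's algorithm is more involved than this.

```python
# Inferring a preference from the observed state of the world: if
# humans have been acting for a long time and the vase is still intact,
# that is evidence they care about the vase.
hypotheses = {
    "cares_about_vase": 0.5,      # prior
    "indifferent_to_vase": 0.5,
}
# Assumed P(vase still intact after years of human activity | hypothesis):
# indifferent humans would likely have knocked it over by now.
p_intact = {
    "cares_about_vase": 0.99,
    "indifferent_to_vase": 0.20,
}

# Bayes rule: observe "vase is intact" and reweight each hypothesis.
unnormalized = {h: prior * p_intact[h] for h, prior in hypotheses.items()}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # most of the mass lands on "cares_about_vase"
```

The key point is that no demonstration was needed: the single observed state, combined with a model of how humans would have acted under each preference, already carries preference information.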
Lucas: Right, so again, this covers the conceptual category of methodologies of AI alignment where we're trying to get AI systems to do what we want?
Rohin: Yeah, current AI systems in a sort of incremental way, without assuming general intelligence.
Lucas: And there's all these different methodologies which exist in this context. But again, this is all sort of within this other umbrella of just getting AI to do things we want them to do?
Rohin: Yeah, and you can actually compare across all of these methods on particular environments. This hasn't really been done so far, but in theory it can be done, and I'm hoping to do it at some point in the future.
Lucas: Okay. So we've discussed embedded agency, we've discussed this other category of getting AIs to do what we want them to do. Just moving forward here through diving deep on these approaches.
Rohin: I think the next one I wanted to talk about was ambitious value learning. So here the basic idea is that we're going to build a superintelligent AI system, and it's going to have goals, because that's what the Von Neumann-Morgenstern theorem tells us: anything with preferences, if they're consistent and coherent, which they should be for a superintelligent system, or at least as far as we can tell they should be, has a utility function. So natural thought: why don't we just figure out what the right utility function is, and put it into the AI system?
So there's a lot of good arguments that you're not going to be able to get the one correct utility function, but I think Stuart's hope is that you can find one that is sufficiently good or adequate, and put that inside of the AI system. In order to do this, I believe the goal is to learn the utility function by looking at both human behavior as well as the algorithm that human brains are implementing. So if you see that the human brain, when it knows that something is going to be sweet, tends to eat more of it, then you can infer that humans like to eat sweet things, as opposed to: humans really dislike eating sweet things, but they're really bad at optimizing their utility function. In this project of ambitious value learning, you also need to deal with the fact that human preferences can be inconsistent, and that the AI system can manipulate the human preferences. The classic example of that would be the AI system could give you a shot of heroin, and that would probably change your preferences from "I do not want heroin" to "I do want heroin." So what does it even mean to optimize for human preferences when they can just be changed like that?
So I think the next one was corrigibility and the associated iterated amplification and debate basically. I guess factored cognition as well. To give a very quick recap, the idea with corrigibility is that we would like to build an AI system that is trying to help us, and that's the property that we should aim for as opposed to an AI system that actually helps us.
One motivation for focusing on this weaker criterion is that it seems quite difficult to create a system that knowably actually helps us, because that means that you need to have confidence that your AI system is never going to make mistakes. It seems like quite a difficult property to guarantee. In addition, if you don't make some assumption on the environment, then there's a no free lunch theorem that says this is impossible. Now it's probably reasonable to put some assumption on the environment, but it's still true that your AI system could have reasonable beliefs based on past experience, and nature still throws it a curve ball, and that leads to some sort of bad outcome happening.
While we would like this to not happen, it also seems hard to avoid, and also probably not that bad. It seems like the worst outcomes come when your superintelligent system is applying all of its intelligence in pursuit of its own goal. That's the thing that we should really focus on. That conception of what we want to enforce is probably the thing that I'm most excited about. Then there are particular algorithms that are meant to create corrigible agents, assuming we have the capabilities to get general intelligence. So one of these is iterated amplification.
Iterated amplification is really more of a framework to describe particular methods of training systems. In particular, you alternate between amplification and distillation steps. You start off with an agent that we're going to assume is already aligned. So this could be a human. A human is a pretty slow agent. So the first thing we're going to do is distill the human down into a fast agent. So we could use something like imitation learning, or maybe inverse reinforcement learning followed by reinforcement learning, or something like that, in order to train a neural net or some other AI system that mostly replicates the behavior of our human, and remains aligned. By aligned maybe I mean corrigible actually. We start with a corrigible agent, and then we produce agents that continue to be corrigible.
Probably the resulting agent is going to be a little less capable than the one that you started out with, just because if the best you can do is to mimic the agent that you started with, that gives you exactly as much capability as that agent. So if you don't succeed at properly mimicking, then you're going to be a little less capable. Then you take this fast agent and you amplify it, such that it becomes a lot more capable, at perhaps the cost of being a lot slower to compute.
One way that you could imagine doing amplification would be to have a human get a top level task, and for now we'll assume that the task is question answering, so they get this top level question and they say: okay, I could answer this question directly, but let me make use of this fast agent that we have from the last turn. We'll make a bunch of sub questions that seem relevant for answering the overall question, ask our distilled agent to answer all of those sub questions, and then using those answers, the human can make a decision for their top level question. It doesn't have to be the human. You could also have a distilled agent at the top level if you want.
I think having the human there seems more likely. So with this amplification you're basically using the agent multiple times, letting it reason for longer in order to get a better result. So the resulting human-plus-many-copies-of-the-agent system is more capable than the original distilled agent, but also slower. So we started off with something, let's call it capability level five, and then we distilled it and it became capability level four, but it was a lot faster. Then we amplified it and maybe now it's capability level eight, but it's a lot slower. So we can distill it again and get something at capability level seven that's pretty fast, and then amplify it again, and so on and so forth. So the hope is that this would allow us to continually train an agent that can reach arbitrary levels of capability that are actually physically possible, while remaining aligned or corrigible the entire time.
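The alternation described above can be sketched as a toy loop. This is purely illustrative: the `Agent` class, the capability numbers, and the amplify/distill deltas are assumptions made up for this sketch, not Paul Christiano's actual algorithm.

```python
# Toy sketch of the iterated amplification training loop.
# Capability numbers are illustrative, not meaningful quantities.

class Agent:
    def __init__(self, capability, fast):
        self.capability = capability
        self.fast = fast  # distilled agents are cheap to run


def distill(agent):
    # Imitation learning: lose a little capability, gain speed.
    return Agent(agent.capability - 1, fast=True)


def amplify(agent):
    # A human consulting many copies of the distilled agent:
    # more capable, but much slower to run.
    return Agent(agent.capability + 4, fast=False)


agent = Agent(capability=5, fast=False)  # start with a (slow) human
for _ in range(3):
    agent = distill(agent)   # e.g. 5 -> 4, but fast
    agent = amplify(agent)   # e.g. 4 -> 8, but slow

print(agent.capability)  # 14 -- capability ratchets up each round
```

The point of the sketch is just the ratchet: each distill step loses a little and each amplify step gains more, so repeating the pair climbs to arbitrary capability levels.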
Lucas: Yeah, that's quite a great picture. I'm happy to jump into debate next, and then I'd like to talk about the potential relationship between the two.
Rohin: Sure. So with debate, the key hope that it is based on is that it is easier to point out a lie than it is to lie in the first place. This would be under the assumption that if you've got two agents, one agent trying to lie and the other agent trying to detect when the first agent is lying, you would need the second agent to have access to all of the information that the first agent is using to make its decision. But if that can happen, then we would hope that it is easier to detect lies than it is to actually lie. If this is actually true, we can imagine the following training process. We give a top level question, and then we have two AI systems debate what the correct answer to that top level question is. One of the agents initially says the answer is 12, then the other agent says no, the answer is 15, and then they start arguing about why the answer should be 12 versus 15.
Eventually, after some amount of time, a human is asked to judge this debate and say which of the AI systems provided more true and useful information. Whichever agent the human decided was correct gets a reward of plus one, and the other one gets a reward of minus one, and is then updated, presumably by gradient descent or something. So the hope here is basically that the two AI systems that are debating each other are about equally powerful, such that if either of them tries to lie or manipulate the user in some way, the other AI system would be able to point that out, and then the human would give the reward to the AI system that pointed it out. So at equilibrium you should expect that both of the agents are just trying to give you true and useful information, and they're not trying to lie, because if they do try to lie, then the other one will point it out.
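The reward structure here is a zero-sum game, which can be sketched in a few lines. The `debate_rewards` function is a stand-in of my own devising; in the actual proposal the judgment comes from a human reading the debate transcript.

```python
# Toy sketch of the debate reward assignment: the human judge's
# verdict determines a zero-sum (+1, -1) reward split between the
# two debaters, which is what removes the incentive to lie when the
# opponent can expose the lie.

def debate_rewards(judge_picks_first):
    # +1 to the winner, -1 to the loser.
    return (1, -1) if judge_picks_first else (-1, 1)


r_a, r_b = debate_rewards(judge_picks_first=True)
print(r_a, r_b)          # 1 -1
assert r_a + r_b == 0    # zero-sum: a lie only pays if it goes undetected
```

Because the game is zero-sum, any successful deception by one agent is exactly the other agent's loss, so each debater is trained to expose the other's lies.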
Lucas: Right. So there's this important feature here where, as you mentioned, the claim is that it's easier to spot lies than to make lies, and this sort of asymmetry is one of the motivations which says that generally it'll be easier to tell when agents are telling the truth rather than making up a lie.
Rohin: As long as you have another AI system that can point this out. Certainly a superintelligent AI system could lie to me and I wouldn't be able to tell, probably, but it's a lot harder for a superintelligent AI system to lie to me when I have another superintelligent AI system that's trying to point out the lies that the first one makes.
Lucas: Right. So now I think we can go ahead and cover its relationship to iterated amplification?
Rohin: Sure. There is actually quite a close relationship between the two, even though it doesn't seem like it at first sight. The hope with both of them is that your AI systems will learn to do human-like reasoning, but on a much larger scale than humans can do. In particular, consider the following kind of agent. You have a human who is given a top level question that they have to answer, and that human can create a bunch of sub questions and then delegate each of those sub questions to another copy of the same human, initialized from scratch or something like that, so they don't know what the top level human has thought.
Then they now have to answer the sub question, but they too can delegate to another human further down the line. And so on: you can just keep delegating down until you get to questions that are so easy that the human can just straight up answer them. So I'm going to call this structure a deliberation tree, because it's a sort of tree of considerations such that at every node, the answer to that node can be computed from the answers to the children nodes, plus a short bit of human reasoning that happens at that node.
In iterated amplification, what's basically happening is you start with the leaf nodes: there's just a human agent, and they can answer questions quickly. Then when you amplify it the first time, you get trees of depth one, where at the top level there's a human who can then delegate sub questions out, but then those sub questions have to be answered by an agent that was trained to be like a human. So you've got something that approximates depth one human deliberation trees. Then after another round of distillation and amplification, you've got a human delegating to agents that were trained to mimic humans that could delegate to agents that were trained to mimic humans: an approximate version of a depth two deliberation tree.
So iterated amplification is basically just building up the depth of the tree that the agent is approximating. But we hope that these deliberation trees are always just basically implementing corrigible reasoning, and that eventually once they get deep enough, you get arbitrarily strong capabilities.
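A deliberation tree like this is just a depth-limited recursive decomposition. Here is a minimal sketch using a deliberately trivial stand-in task (summing a tuple of numbers); the decomposition rule and the "easy enough to answer directly" test are assumptions chosen so the example runs, not part of the actual proposal.

```python
# Toy sketch of a deliberation tree: each node either answers its
# question directly (if easy enough or at maximum depth) or splits
# it into sub questions delegated to fresh copies one level down.
# The "question" here is a tuple of numbers whose sum we want.

def deliberate(question, depth):
    if depth == 0 or len(question) == 1:
        # Leaf node: easy enough to answer straight up.
        return sum(question)
    # Decompose into two sub questions and delegate each one.
    mid = len(question) // 2
    left = deliberate(question[:mid], depth - 1)
    right = deliberate(question[mid:], depth - 1)
    # A short bit of "human reasoning" at this node combines the
    # children's answers into this node's answer.
    return left + right


print(deliberate((1, 2, 3, 4, 5, 6, 7, 8), depth=3))  # 36
```

Note that a tree of depth d can touch exponentially many leaves (2^d here), which is the "exponential sized deliberation tree" that iterated amplification and debate both hope the trained agent implicitly computes.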
Lucas: Can you offer some clarification as to why one might expect a group of copies of an AI, plus the human, to be able to make sure that alignment is retained during distillation?
Rohin: That's an open problem. Whether you can do a distillation step that preserves alignment/corrigibility is a thing that Paul, in a few recent posts in the Iterated Amplification Sequence, calls the reward engineering problem. The hope is that if you believe that the amplified agent is corrigible, then it is going to be smarter than the agent that it is trying to train via distillation. So you can actually just use the amplified agent to create a reward signal in order to train an agent during distillation. Since the amplified agent is smarter than the agent you're distilling, you could plausibly create a reward function that's not easily gameable and actually gets the AI system to do what you want. I think that's the concise, lacking-nuance story of why you might be optimistic about this.
Lucas: All right.
Rohin: So I'll move on to how debate is also related to this. So we talked about how iterated amplification is basically growing the depth of the deliberation trees that the agent is approximating. The human part of this is judging any one node and its children. In debate, on the other hand, you can imagine the same sort of deliberation tree, although now they're more like arguments and counter-arguments as opposed to considerations and counter-considerations, but broadly the same thing. So imagine there's this actual debate tree of ways the debate could possibly go.
Then you could think of the AI systems as choosing a particular path in the debate tree that makes them most likely to win. The key point is that, given that the entire question can be resolved by an exponentially sized deliberation tree, if the two AI systems are capable of computing this exponential deliberation tree, then optimal play in the debate game is to go along the path that is going to lead to your victory, even given that the other player is trying to win themselves. The relation between iterated amplification and debate is that they both want the agents to implicitly be able to compute this exponentially sized deliberation tree that humans could not do, and then use humans to check a particular part of that tree. In iterated amplification you check a parent and its children: you look at that one section of the tree and you make sure that it looks good. In debate, you look at a particular path in the debate tree and judge whether that path is good. One critique of these methods is that it's not actually clear that an exponentially sized deliberation tree is able to solve all the problems that we might care about, especially if the amount of work done at each node is pretty short, like a ten-minute stint of a normal human.
One question that you would care about if you wanted to see whether iterated amplification could work is: can these exponentially sized deliberation trees actually solve hard problems? This is the factored cognition hypothesis: that these deliberation trees can in fact solve arbitrarily complex tasks. And Ought is basically working on testing this hypothesis to see whether or not it's true, by finding the tasks which seem hardest to do in this decompositional way, and then seeing if teams of humans can actually figure out how to do them.
Lucas: Do you have an example of what would be one of these tasks that are difficult to decompose?
Rohin: Yeah. Take a bunch of humans who don't know differential geometry or something, and have them solve the last problem in a textbook on differential geometry. They each only get ten minutes in order to do anything. None of them can read the entire textbook, because that takes way more than ten minutes. I believe Ought is maybe not looking into that one in particular, as that one sounds extremely hard, but they might be doing similar things with works of literature, like trying to answer questions about a book that no one has actually read.
But I remember that Andreas was actually talking about this particular problem that I mentioned as well. I don't know if they actually decided to do it.
Lucas: Right. So I mean just generally in this area here, it seems like there are these interesting open questions and considerations about I guess just the general epistemic efficacy of debate. And how good AI and human systems will be at debate, and again also as you just pointed out, whether or not arbitrarily difficult tasks can be solved through this decompositional process. I mean obviously we do have proofs for much simpler things. Why is there a question as to whether or not it would scale? How would it eventually break?
Rohin: With iterated amplification in particular, if you're starting with humans who have only ten minutes to look at resources and come up with an answer, the particular thing I would say they might not be able to do is take a math textbook that the human did not know already and solve the final problem in it. Iterated amplification, to the extent that it starts with a human who's limited to ten minutes, and whenever it does a decomposition, does it with a human who's limited to ten minutes, would inherit all of the same problems if those problems are real problems.
Lucas: Cool, that's interesting. I don't have anything to add here then. So happy to keep on moving.
Rohin: I think I want to talk about limited AI systems next. So with AI boxing, the hope is to create a box around an AI system that prevents anything from inside the box from going out, and anything from outside the box from going in, except particular input-output channels that we explicitly allow. So the hope here is that this allows us to make assumptions about the input-output channels that actually will hold in practice, and can be used to ensure that AIs function more like tool AIs, rather than agential AIs that will do bad things.
I think most literature on boxing has had an AI system in a box, and a human operator outside of it. One interesting proposal I've heard recently is to have the human also be inside the box, because if you've got information flowing from an AI to a human, that's a pretty big hole in your box. But if the human is actually inside the box, then there's not as much that can be done. There's not as much information flowing into or out of the box.
Lucas: What does it actually mean to have the human locked in the box?
Rohin: One proposal would be to have a human go inside a secured room from which no information can enter or leave, apart from the fact of whether or not the human has entered or left the room. The human can then interact with the AI as normal. As soon as the human leaves the room, the AI's reward function is zero. It basically just doesn't care about anything that happens in the future after that. So in this setting you would hope that the AI system has no incentive to affect the outside world, because the only way it can affect the outside world is by sending the human outside. But as soon as it sends the human outside, its reward gets set to zero forever, so it doesn't care about any of that.
Lucas: But wouldn't it care before the human went out?
Rohin: Yes, but before the human goes out, there is nothing it can do to influence the outside world because both the human and the AI are in the box. It could influence the human while the human is in the box. But the human also can't affect the outside world while it's in the box.
Lucas: Yeah, but the human leaves the box and then can affect the world.
Rohin: Right. But any change to the world that happens after the human leaves, doesn't make it into the reward function.
Lucas: But it does because the AI system would be planning over time from its initial moment where the reward isn't set to zero.
Rohin: But your reward can be limited to the next K time steps, or something like this. Basically you're limited to until the point that the human leaves. That's the proposal here.
Lucas: I see, okay. That makes sense.
Rohin: The only information leaving the box, assuming you secure the box in all the normal ways, is whether or not the human has chosen to leave. That's pretty low information; it doesn't seem like an AI system would be able to use that to influence the outside world in some ridiculous way. So that's boxing.
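The reward truncation being discussed can be written down in one line. This is a sketch of my own to make the incentive structure concrete; the discretization into time steps and the function name are assumptions, not part of a published proposal.

```python
# Toy sketch of the boxed-reward idea: any reward accruing at or
# after the time step when the human exits the box is zeroed out,
# so the episode return gives the AI no stake in the post-exit
# world -- the only channel through which it could affect anything
# outside the box.

def boxed_return(rewards, human_left_at):
    # Only rewards earned strictly before the human exits count.
    return sum(r for t, r in enumerate(rewards) if t < human_left_at)


print(boxed_return([1.0, 2.0, 3.0, 4.0], human_left_at=2))  # 3.0
```

Since every term of the return is earned while both the human and the AI are still inside, influencing the outside world cannot raise the AI's return, which is the hoped-for incentive property.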
The next thing on my list for limited AGI is impact measures and side effects. There are also mild optimization and conservative concepts, but let's start with impact measures. The basic hope is to create some quantification of how much impact a particular action that the AI chooses has on the world, and to then penalize the AI for having a lot of impact, so that it only does low impact things, which presumably will not cause catastrophe. One approach to this is relative reachability. With relative reachability, you're basically trying to not decrease the number of states that you can reach from the current state. So you're trying to preserve option value. You're trying to keep the same states reachable.
It's not okay for you to make one state unreachable as long as you make a different state reachable. You need all of the states that were previously reachable to continue being reachable. The relative part is that the penalty is calculated relative to a baseline that measures what would've happened if the AI had done nothing, although there are other possible baselines you could use. The reason you do this is so that we don't penalize the agent for side effects that happen in the environment anyway. Like, maybe I eat a sandwich, and now the states where there's a sandwich in front of me are no longer accessible, because I can't un-eat a sandwich. We don't want to penalize our AI system for that impact, because then it'll try to stop me from eating the sandwich. We want to isolate the impact of the agent, as opposed to impacts that were happening in the environment anyway. So that's why we need the relative part.
There is also attainable utility preservation from Alex Turner, which makes two major changes from relative reachability. First, instead of talking about reachability of states, it talks about how well you can achieve different utility functions. So if previously you were able to make lots of paperclips, then you want to make sure that you can still make lots of paperclips. If previously you were able to travel across the world within a day, then you want to still be able to travel across the world in a day. So that's the first change it makes.
The second change is that not only does it penalize decreases in attainable utility, it also penalizes increases in attainable utility. So if previously you could not mine asteroids in order to get their natural resources, you should still not be able to mine asteroids and get their resources. This seems kind of crazy when you first hear it, but the rationale for it is that all of the convergent instrumental subgoals are about increases in the power of your AI system. For example, for a broad range of utility functions, it is useful to get a lot of resources and a lot of power in order to achieve those utility functions. Well, if you penalize increases in attainable utility, then you're going to penalize actions that just broadly get more resources, because those are helpful for many, many, many different utility functions.
Similarly, if you were going to be shut down, but then you disable the shutdown button, well, that just makes it much more possible for you to achieve pretty much every utility function, because instead of being off, you are still on and can take actions. So that also will get heavily penalized, because it led to such a large increase in attainable utilities. So those are, I think, the two main impact measures that I know of.
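The two-sided nature of the penalty can be sketched as a sum of absolute changes in attainable utility across a set of auxiliary utility functions. This is a simplification of my own; Turner's actual formulation scales the penalty and computes attainable utility with Q-values, which this toy version glosses over.

```python
# Toy sketch of an attainable-utility-preservation-style penalty:
# sum the absolute change in how much utility the agent could
# attain, across several auxiliary utility functions. Both losses
# AND gains in attainable utility are penalized, which is what
# catches power-grabbing actions.

def aup_penalty(attainable_before, attainable_after):
    return sum(abs(after - before)
               for before, after in zip(attainable_before, attainable_after))


# An action that grabs resources raises attainable utility for many
# auxiliary goals at once, so it is heavily penalized even though
# nothing was destroyed.
print(aup_penalty([1.0, 2.0, 3.0], [1.0, 5.0, 6.0]))  # 6.0
```

Disabling a shutdown button looks the same way in this sketch: attainable utility jumps for nearly every auxiliary function (on beats off for almost any goal), so the absolute-value penalty is large.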
Okay, we're getting to the things where I have less to say about them, but now we're at robustness. I mentioned this before, but there are two main challenges with verification. There's the specification problem, and there's making it computationally efficient. All of the work is on the computational-efficiency side, but I think the hardest part is the specification side, and I'd like to see more people do work on that.
I don't think anyone is really working on verification with an eye to how to apply it to powerful AI systems. I might be wrong about that. Like, I know some people who do care about AI safety who are working on verification, and it's possible that they have thoughts about this that aren't published and that I haven't talked to them about. But the main thing I would want to see is: what specifications can we actually give to our verification subroutines? At first glance, this is just the full problem of AI safety. We can't just give a specification for what we want to an AGI.
What specifications can we give for a verification that's going to increase our trust in the AI system? For adversarial training, again, all of the work done so far is in the adversarial example space, where you try to train an image classifier to be more robust to adversarial examples, and this kind of works sometimes, but doesn't work great. For both verification and adversarial training, Paul Christiano has written a few blog posts about how you can apply this to advanced AI systems, but I don't know if anyone is actively working on these with AGI in mind. With adversarial examples, there is too much work for me to summarize.
The thing that I find interesting about adversarial examples is that it shows that we are not able to create image classifiers that have learned human preferences. Humans have preferences over how we classify images, and we didn't succeed at that.
Lucas: That's funny.
Rohin: I can't take credit for that framing, that one was due to Ian Goodfellow. But yeah, I see adversarial examples as contributing to a theory of deep learning that tells us how to get deep learning systems to be closer to what we want them to be, rather than these weird things that classify pandas as gibbons, even when they're very clearly still pandas.
Lucas: Yeah, the framing's pretty funny, and makes me feel kind of pessimistic.
Rohin: Maybe if I wanted to inject some optimism back in, there's a frame under which adversarial examples happen because our data sets are too small or something. We have some pretty large data sets, but humans do see more and get far richer information than just pixel inputs. We can go feel a chair and build 3D models of a chair through touch in addition to sight. There is actually a lot more information that humans have, and it's possible that what our AI systems need is just way more information, in order to narrow in on the right model.
So let us move on to, I think the next thing is interpretability, which I also do not have much to say about, mostly because there is tons and tons of technical research on interpretability, and there is not much on interpretability from an AI alignment perspective. One thing to note with interpretability is you do want to be very careful about how you apply it. Suppose you have a feedback cycle where you're like: I built an AI system, I'm going to use interpretability to check whether it's good, and then you're like, oh shit, this AI system was bad, it was not making decisions for the right reasons, and then you go and fix your AI system, and then you throw interpretability at it again, and then you're like, oh no, it's still bad because of this other reason. If you do this often enough, basically what's happening is you're training your AI system to no longer have failures that are obvious to interpretability, and instead have failures that are not obvious to interpretability, which will probably exist, because your AI system seems to have been full of failures anyway.
So I would be pretty pessimistic about the system that interpretability found 10 or 20 different errors in. I would just expect that the resulting AI system has other failure modes that we were not able to uncover with interpretability, and those will at some point trigger and cause bad outcomes.
Lucas: Right, so interpretability covers things such as superhuman intelligence interpretability, but also more mundane examples of present day systems, correct? Where the interpretability of, say, neural networks is basically, my understanding is, nowhere right now.
Rohin: Yeah, that's basically right. There have been some techniques developed, like saliency maps, feature visualization, neural net models that hallucinate explanations post hoc; people have tried a bunch of things. None of them seem especially good, though some of them definitely are giving you more insight than you had before.
So I think that only leaves CAIS. Comprehensive AI Services is like a forecast for how AI will develop in the future. It also has some prescriptive aspects to it, like: yeah, we should probably not do these things, because these don't seem very safe, and we can do these other things instead. In particular, CAIS takes a strong stance against AGI agents that are God-like, fully integrated systems optimizing some utility function over the long term future.
It should be noted that it's arguing against a very specific kind of AGI agent: this sort of long term expected utility maximizer that's fully integrated, is an opaque black box, and can't be broken down into modular components. That entire cluster of features is what CAIS is talking about when it says AGI agent. So it takes a strong stance against that, saying, A, it's not likely that this is the first superintelligent thing that we build, and B, it's clearly dangerous. That's what we've been saying the entire time. So here's a solution: why don't we just not build it, and build these other things instead? As for what the other things are, the basic intuition pump here is that if you look at how AI is developed today, there is a bunch of research and development practices that we do. We try out a bunch of models, we try some different ways to clean our data, we try different ways of collecting data sets, and we try different algorithms, and so on and so forth, and these research and development practices allow us to create better and better AI systems.
Now, our AI systems currently are also very bounded in the tasks that they do. There are specific tasks, and they do that task and that task alone; they do it in an episodic way. They are only trying to optimize over a bounded amount of time, and they use a bounded amount of computation and other resources. So that's what we're going to call a service. It's an AI system that does a bounded task, in bounded time, with bounded computation. Everything is bounded. Now, our research and development practices are themselves bounded tasks, and AI has shown itself to be quite good at automating bounded tasks. We've definitely not automated all bounded tasks yet, but it does seem like we are in general pretty good at automating bounded tasks with enough effort. So probably we will also automate research and development tasks.
We're seeing some of this already with neural architecture search, for example, and once AI R&D processes have been sufficiently automated, then we get this cycle where AI systems are doing the research and development needed to improve AI systems. So we get to this point of recursive improvement that's not self improvement anymore, because there's not really an agent-like self to improve, but you do have recursive AI improving AI. So this can lead to the sort of very quick improvement in capabilities that we often associate with superintelligence. With that we can eventually get to a situation where, for any task that we care about, we could have a service that breaks that task down into a bunch of simple, automatable, bounded tasks, and then we can create services that do each of those bounded tasks and interact with each other in order to complete the long term task in tandem.
This is how humans do engineering and building things. We have these research and development practices, and we have these modular systems that are interacting with each other via well defined channels, so this seems more likely to be the first thing that we build that's capable of superintelligent reasoning, rather than an AGI agent that's optimizing a utility function over the long term, yada, yada, yada.
Lucas: Is there no risk? Because the superintelligence is the distributed network collaborating. So is there no risk of the collective distributed network creating some sort of epiphenomenal optimization effects?
Rohin: Yup, that's definitely a thing that you should worry about. I know that Erik agrees with me on this because he explicitly lists this out in the tech report as a thing that needs more research and that we should be worried about. But the hope is that there are other things that you can do that normally we wouldn't think about with technical AI safety research that would make more sense in this context. For example, we could train a predictive model of human approval. Given any scenario, the AI system should predict how much humans are going to like it or approve of it, and then that service can be used in order to check that other services are doing reasonable things.
Similarly, we might look at each individual service and see which of the other services it's accessing, and then make sure that those are reasonable services. If we see the CEO-of-a-paperclip-company service going and talking to the synthetic biology service, we might be a bit suspicious and be like, why is this happening? And then we can go and check to see why exactly that has happened. So there are all of these other things that we could do in this world, which aren't really options in the AGI agent world.
Lucas: Aren't they options in the AGI agential world where the architectures are done such that these important decision points are analyzable to the same degree as they would be in a CAIS framework?
Rohin: Not to my knowledge. As far as I can tell, with most end-to-end trained things, you might design the architecture such that there are points at which you expect certain kinds of information to flow, but you can't easily look at the information that's actually there and deduce what the system is doing. It's just not interpretable enough to do that.
Lucas: Okay. I don't think I have any other questions or interesting points with regards to CAIS. It's a very different and interesting conception of the kind of AI world that we can create. It seems to pose its own new coordination challenge: if the hypothesis is true that agential AIs will be afforded more causal power in the world, and more efficiency, than CAIS systems, that'll give them a competitive advantage that could bias civilization away from CAIS systems.
Rohin: I do want to note that I think agential AI systems will be more expensive and take longer to develop than CAIS, so I do think CAIS will come first. Again, this is all within a particular worldview.
Lucas: Maybe this is abstracting too far, but does CAIS claim to function as an AI alignment methodology for the long term? Do we retain the CAIS architecture path, with CAIS creating superintelligence or some sort of distributed task force?
Rohin: I'm not actually sure. There are definitely a few chapters in the technical report that are like, okay, what if we do build AGI agents? How could we make sure that goes well? As long as CAIS comes before AGI systems, here's what we can do in that setting.
But I personally think that AGI systems will come. My guess is that Erik does not think this is necessary, and that we could actually just have CAIS systems forever. I don't really have a model for when to expect AGI separately from the CAIS world. I guess I have a few different potential scenarios that I can consider, and I can compare it to each of those, but it's not a binary of CAIS versus not-CAIS. It's more that CAIS is one of a whole bunch of potential scenarios, and in reality it'll be some mixture of all of them.
Lucas: Okay, that makes more sense. So, there's sort of an overload here, just a ton of awesome information with regards to all of these different methodologies and conceptions. Looking at all of it, how do you feel about these different methodologies in general, and how does AI alignment look to you right now?
Rohin: I'm pretty optimistic about AI alignment, but I don't think that's so much from the particular technical safety research that we have. That's some of it: I do think that there are promising approaches, and the fact that there are promising approaches makes me more optimistic. But more of my optimism comes from the strategic picture. A belief that, A, we will be able to convince people that this is important, such that people start actually focusing on this problem more broadly; B, that we will be able to get a bunch of people to coordinate such that they're more likely to invest in safety; and C, that I don't place as much weight on the scenario where AI systems are long-term utility maximizers and therefore we're basically all screwed, which seems to be the position of many other people in the field.
When I say optimistic, I mean optimistic relative to them. I'm probably pessimistic relative to the average person.
Lucas: A lot of these methodologies are new. Do you have any sort of broad view about how the field is progressing?
Rohin: Not a great one, mostly because I would consider myself, well, maybe I've just recently stopped being new to the field, so I didn't really get to observe the field very much in the past. But it seems like there's been a shift towards figuring out how all of the things people were thinking about apply to real machine learning systems, which seems nice. The fact that it does connect is good. I don't think the connections were super natural or just sort of clicked, but they did mostly work out in many cases, and that seems pretty good. So yeah, the fact that we're now doing a combination of theoretical, experimental, and conceptual work seems good.
It's no longer the case that we're mostly doing theory. That seems probably good.
Lucas: You've already mentioned a lot of really great links in this podcast: places people can go to learn more about these specific approaches, papers, and strategies. One place that's just generally great for people to go is the Alignment Forum, where a lot of this information already exists. Are there any other places in general that you'd recommend people check out if they're interested in taking more technical deep dives?
Rohin: Probably, at this point, one of the best places for a technical deep dive is actually the Alignment Newsletter database. I write a newsletter every week about AI alignment, covering all the stuff that's happened in the past week; that's the Alignment Newsletter, not the database, and people can sign up for it too, but it's not really a thing for technical deep dives. It's more a thing for keeping pace with developments in the field. In addition, everything that ever goes into the newsletter is also kept in a separate database. I say database; it's basically a Google Sheets spreadsheet. So if you want to do a technical deep dive on any particular area, you can just go look for the right category on the spreadsheet, and then look at all the papers there and read some or all of them.
Lucas: Yeah, so thanks so much for coming on the podcast, Rohin. It was a pleasure to have you, and I really learned a lot and found it to be super valuable. So yeah, thanks again.
Rohin: Yeah, thanks for having me. It was great to be on here.
Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We'll be back again soon with another episode in the AI alignment series.