
FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O’Keefe

Published
28 February, 2020

As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure the abundance and wealth created by transformative AI benefits humanity globally.

Topics discussed in this episode include:

  • What the Windfall Clause is and how it might function
  • The need for such a mechanism given AGI generated economic windfall
  • Problems the Windfall Clause would help to remedy 
  • The mechanism for distributing windfall profit and the function for defining such profit
  • The legal permissibility of the Windfall Clause 
  • Objections and alternatives to the Windfall Clause

Timestamps: 

0:00 Intro

2:13 What is the Windfall Clause? 

4:51 Why do we need a Windfall Clause? 

6:01 When we might reach windfall profit and what that profit looks like

08:01 Motivations for the Windfall Clause and its ability to help with job loss

11:51 How the Windfall Clause improves allocation of economic windfall 

16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems

18:45 The Windfall Clause as assisting with general norm setting

20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk

23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation

25:03 The windfall function and desiderata for guiding its formation

26:56 How the Windfall Clause differs from a new taxation scheme

30:20 Developing the mechanism for distributing the windfall 

32:56 The legal permissibility of the Windfall Clause in the United States

40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands

43:28 Historical precedents for the Windfall Clause

44:45 Objections to the Windfall Clause

57:54 Alternatives to the Windfall Clause

01:02:51 Final thoughts

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

Transcript

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Cullen O’Keefe about a recent report he was the lead author on called The Windfall Clause: Distributing the Benefits of AI for the Common Good. For some quick background, the agricultural and industrial revolutions unlocked new degrees and kinds of abundance, and so too should the intelligence revolution currently underway. Developing powerful forms of AI will likely unlock levels of abundance never before seen, and this comes with the opportunity of using such wealth in service of the common good of all humanity and life on Earth but also with the risks of increasingly concentrated power and resources in the hands of the few who wield AI technologies. This conversation is about one possible mechanism, the Windfall Clause, which attempts to ensure that the abundance and wealth likely to be created by transformative AI systems benefits humanity globally.

For those not familiar with Cullen, Cullen is a policy researcher interested in improving the governance of artificial intelligence using the principles of Effective Altruism.  He currently works as a Research Scientist in Policy at OpenAI and is also a Research Affiliate with the Centre for the Governance of AI at the Future of Humanity Institute.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

And with that, here is Cullen O’Keefe on the Windfall Clause.

We're here today to discuss this recent paper that you were the lead author on, called The Windfall Clause: Distributing the Benefits of AI for the Common Good. Now, there's a lot there in the title, so we can start off pretty simply here with: what is the Windfall Clause, and how does it serve the mission of distributing the benefits of AI for the common good?

Cullen O'Keefe: So the Windfall Clause is a contractual commitment AI developers can make, that basically stipulates that if they achieve windfall profits from AI, they will donate some percentage of that to causes that benefit everyone.

Lucas Perry: What does it mean to achieve windfall profits?

Cullen O'Keefe: The answer that we give is that when a firm's profits grow in excess of 1% of gross world product, which is just the sum of all countries' GDPs, then that firm has hit windfall profits. We use this slightly weird measurement of profits as a percentage of gross world product just to try to convey the notion that the thing that's relevant here is not necessarily the size of profits, but really the relative size of profits, relative to the global economy.
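To make that threshold concrete, here is a minimal sketch of the windfall test as just described. The gross world product figure (roughly $85 trillion nominal, circa 2019) is an outside approximation, not a number from the conversation:

```python
# Minimal sketch of the windfall test described above.
# The GWP figure is an approximation (~$85 trillion nominal, circa 2019).
GROSS_WORLD_PRODUCT = 85e12

def is_windfall(annual_profits: float, gwp: float = GROSS_WORLD_PRODUCT) -> bool:
    """A firm hits windfall profits when profits exceed 1% of gross world product."""
    return annual_profits > 0.01 * gwp

print(is_windfall(900e9))   # True: $900B exceeds the ~$850B threshold
print(is_windfall(100e9))   # False
```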

Lucas Perry: Right. And so an important background framing and assumption here seems to be the credence that one may have in transformative AI or in artificial general intelligence or in superintelligence creating previously unattainable levels of wealth and value and prosperity. I believe that in terms of Nick Bostrom's Superintelligence, this work in particular is striving to serve the common good principle, that superintelligence or AGI should be created in the service of and the pursuit of the common good of all of humanity and life on Earth. Is there anything here that you could add about the background to the inspiration around developing the Windfall Clause?

Cullen O'Keefe: Yeah. That's exactly right. The phrase Windfall Clause actually comes from Bostrom's book. Basically, the idea was something that people inside of FHI were excited about for a while, but really hadn't done anything with because of some legal uncertainties. Basically, the fiduciary duty question that I examined in the third section of the report. When I was an intern there in the summer of 2018, I was asked to do some legal research on this, and ran away with it from there. My legal research pretty convincingly showed that it should be legal as a matter of corporate law for a corporation to enter into such a contract. In fact, I don't think it's a particularly hard case. I think it looks like things that corporations do a lot already. And I think some of the bigger questions were around the implications and design of the Windfall Clause, which are also addressed in the report.

Lucas Perry: So, we have this common good principle, which serves as the moral and ethical foundation. And then the Windfall Clause, it seems, is an attempt at a particular policy solution for AGI and superintelligence serving the common good. With this background, could you expand a little bit more on why it is that we need a Windfall Clause?

Cullen O'Keefe: I guess I wouldn't say that we need a Windfall Clause. The Windfall Clause might be one mechanism that would solve some of these problems. The primary way in which cutting-edge AI is being developed is currently in private companies. And the way that private companies are structured is perhaps not maximally conducive to the common good principle. This is not due to corporate greed or anything like that. It's more just a function of the role of corporations in our society, which is that they're primarily vehicles for generating returns to investors. One might think that the tools that we currently have for taking some of the returns that are generated for investors and making sure that they're distributed in a more equitable and fair way are inadequate in the face of AGI. And so that's kind of the motivation for the Windfall Clause.

Lucas Perry: Maybe you could speak a little bit to the surveys of researchers' credences and estimates about when we might get certain kinds of AI, and then what windfall in the context of an AGI world actually means.

Cullen O'Keefe: The surveys of AGI timelines: I think this is an area with high uncertainty. We cite Katja Grace's survey of AI experts, which is a few years old at this point. I believe that the median timeline AI experts gave in that was somewhere around 2060 for attaining AGI, as defined in a specific way by that paper. I don't have opinions on whether that timeline is realistic or unrealistic. We just take it as a baseline, as the best specific timeline that has at least some evidence behind it. And what was the second question?

Lucas Perry: What degrees of wealth might be brought about via transformative AI?

Cullen O'Keefe: The short and unsatisfying answer to this is that we don't really know. I think that the amount of economic literature really focusing on AGI in particular is pretty minimal. Some more research on this would be really valuable. A company earning profits that are defined as windfall via the report would be pretty unprecedented in history, so it's a very hard situation to imagine. Forecasts about the way that AI will contribute to growth are pretty variable. I think we don't really have a good idea of what that might mean. And I think that's especially because the interface between economists and people thinking about AGI has been pretty minimal. A lot of the thinking has been more focused on more mainstream issues. If the strongest version of AGI were to come, the economic gains could be pretty huge. There's a lot on the line in that circumstance.

Part of what motivated the Windfall Clause, is trying to think of mechanisms that could withstand this uncertainty about what the actual economics of AGI will be like. And that's kind of what the contingent commitment and progressively scaling commitment of the Windfall Clause is supposed to accomplish.

Lucas Perry: All right. So, now I'm going to explore here some of these other motivations that you've written in your report. There is the need to address loss of job opportunities. There is the need to improve the allocation of economic windfall, which, if we didn't do anything right now, there would actually be no way of doing other than through whatever system of taxes we would have around that time. There's also this need to smooth the transition to advanced AI. And then there is this general norm-setting strategy here, which I guess is an attempt to imbue and instantiate a kind of benevolent ethics based on the common good principle. Let's start off by hitting on addressing the loss of job opportunities. How might transformative AI lead to the loss of job opportunities, and how does the Windfall Clause help to remedy that?

Cullen O'Keefe: So I want to start off with a couple of caveats. Number one, I'm not an economist. Second, I'm very wary of promoting Luddite views. It's definitely true that in the past, technological innovation has been pretty universally positive in the long run, notwithstanding short-term problems with transitions. So, it's definitely by no means inevitable that advances in AI will lead to joblessness or decreased earnings. That said, I do find it pretty hard to imagine a scenario in which we achieve very general purpose AI systems, like AGI, and there are still bountiful opportunities for human employment. I think there might be some jobs which have human-only employment or something like that. It's kind of unclear, in an economy with AGI or something else resembling it, why there would be demand for humans. There might be jobs, I guess, in which people are inherently uncomfortable having non-humans. Good examples of this would be priests or clergy; probably most religions will not want to automate their clergy.

I'm not a theologian, so I can't speak to the proper theology of that, but that's just my intuition. People also mention things like psychiatrists, counselors, teachers, child care, stuff like that. That doesn't look as automatable. And then there's the human meaning aspect of this. John Danaher, a philosopher, recently released a book called Automation and Utopia, talking about how for most people work is the primary source of meaning. It's certainly what they do with the great plurality of their waking hours. And I think people like me and you are lucky enough to like our jobs a lot, but for many people work is mostly a source of drudgery, often unpleasant, unsafe, etcetera. But if we find ourselves in a world in which work is largely automated, not only will we have to deal with the economic issues relating to how people who can no longer offer skills for compensation will feed themselves and their families, but also how they'll find meaning in life.

Lucas Perry: Right. If the category and meaning of jobs changes or is gone altogether, the Windfall Clause is also there to help meet fundamental universal basic human needs, and then also can potentially have some impact on this question of value and meaning. If the Windfall Clause allows you to have access to hobbies and nice vacations and other things that give human beings meaning.

Cullen O'Keefe: Yeah. I would hope so. It's not a problem that we explicitly address in the paper. I think this is kind of in the broader category of what to actually do with the windfall once it's donated. You can think of this as like the bottom of the funnel, whereas the Windfall Clause report is more focused at the top of the funnel: getting companies to actually commit to such a thing. And I think there's a huge, rich area of work to think about: what do we actually do with the surplus from AGI once it manifests, assuming that we can get it into the coffers of a public-minded organization? It's something that I'm lucky enough to think about in my current job at OpenAI. So yeah, making sure that both material needs and psychological higher needs are taken care of. That's not something I have great answers for yet.

Lucas Perry: So, moving on here to the second point. We also need a Windfall Clause or function or mechanism, in order to improve the allocation of economic windfall. So, could you explain that one?

Cullen O'Keefe: You can imagine a world in which employment kind of looks the same as it is today. Most people have jobs, but a lot of the gains are going to a very small group of people, namely shareholders. I think this is still a pretty sub-optimal world. There are diminishing returns on money for happiness. So all else equal, and ignoring incentive effects, progressively distributing money seems better than not. The firms looking to develop the AI are primarily based in a small set of countries. In fact, within those countries, the group of people who are heavily invested in those companies is even smaller. And so in a world even where employment opportunities for the masses are pretty normal, we could still expect to see pretty concentrated accrual of benefits, both within nations, but I think also very importantly, across nations. This seems pretty important to address, and the Windfall Clause aims to do just that.

Lucas Perry: A bit of speculation here, but we could have had a kind of Windfall Clause for the industrial revolution, which probably would have made much of the world better off and there wouldn't be such unequal concentrations of wealth in the present world.

Cullen O'Keefe: Yeah. I think that's right. I think there's sort of a Rawlsian or Harsanyian motivation there, that if we didn't know whether we would be in an industrial country or a country that is later to develop, we would probably want to set up a system that has a more equal distribution of economic gains than the one that we have today.

Lucas Perry: Yeah. By Rawlsian, you meant the Rawls' veil of ignorance, and then what was the other one you said?

Cullen O'Keefe: Harsanyi is another philosopher who is associated with the veil of ignorance idea, and he argues, I think pretty forcefully, that actually the agreement that you would come to behind the veil of ignorance is one that maximizes expected utility, just due to classic axioms of rationality. What you would actually want to do is maximize expected utility, whereas John Rawls has this idea that you would want to maximize the lot of the worst off, which Harsanyi argues doesn't really follow from the veil of ignorance and decision-theoretic best practices.

Lucas Perry: I think that the veil of ignorance, for listeners who don't know what that is, is this: imagine yourself not knowing who you were going to be born as in the world. You should make ethical and political and moral and social systems with that view in mind. And if you do that, you will pretty honestly and wholesomely come up with something, to your best ability, that is good for everyone. From behind that veil of ignorance, of not knowing who you might be in the world, you can produce good ethical systems. Now this is relevant to the Windfall Clause, because going through your paper, there's the tension between arguing that this is actually something that is legally permissible and that institutions and companies would want to adopt, and the fact that it seems to cut against maximizing profits for shareholders and the people with wealth and power in those companies. And so there's this fundamental tension behind the Windfall Clause, between the incentives of those with power to maintain and hold on to that power and wealth, and the very strong and important ethical and normative convictions that say this ought to be distributed for the welfare and wellbeing of all sentient beings across the planet.

Cullen O'Keefe: I think that's exactly right. I think part of why I and others at the Future of Humanity Institute were interested in this project is that we know a lot of people working in AI at all levels. And I think a lot of them do want to do the genuinely good thing, but feel the constraints of economics and also of fiduciary duties. We didn't have any particular insights into that with this piece, but I think part of the motivation is just that we want to put resources out there for any socially conscious AI developers to say, "We want to make this commitment and we feel very legally safe doing so," for the reasons that I lay out.

It's a separate question whether it's actually in their economic interest to do that or not. But at least we think they have the legal power to do so.

Lucas Perry: Okay. So maybe we can get into and explore the ethical aspect of this more. I think we're very lucky to have people like you and your fellow colleagues who have the ethical conviction to follow through and be committed to something like this. But for the people that don't have that, I'm interested in discussing later what to do with them. So, in terms of more of the motivations here, the Windfall Clause is also motivated by this need for a smooth transition to transformative AI or AGI or superintelligence or advanced AI. So what does that mean?

Cullen O'Keefe: As I mentioned, it looks like economic growth from AI will probably be a good thing if we manage to avoid existential and catastrophic risks. That's almost tautological, I suppose. But just as in the industrial revolution, where you had a huge spur of economic growth but also a lot of turbulence, part of the idea of the Windfall Clause is basically to funnel some of that growth into a sort of insurance scheme that can help make that transition smoother. An un-smooth transition would be something like: a lot of countries are worried they're not going to see any appreciable benefit from AI and indeed might lose out a lot, because a lot of their industries would be offshored or reshored and a lot of their people would no longer be economically competitive for jobs. So, that's the kind of stability that I think we're worried about. And the Windfall Clause is basically just a way of saying: you're all going to gain significantly from this advance. Everyone has a stake in making this transition go well.

Lucas Perry: Right. So I mean there's a spectrum here and on one end of the spectrum there is say a private AI lab or company or actor, who is able to reach AGI or transformative AI first and who can muster or occupy some significant portion of the world GDP. That could be anywhere from one to 99 percent. And there could or could not be mechanisms in place for distributing that to the citizens of the globe. And so one can imagine, as power is increasingly concentrated in the hands of the few, that there could be quite a massive amount of civil unrest and problems. It could create very significant turbulence in the world, right?

Cullen O'Keefe: Yeah. Exactly. And it's our hypothesis that having credible mechanisms ex-ante to make sure that approximately everyone gains from this will make people and countries less likely to take destabilizing actions. It's also a public good of sorts. You would expect that it would be in everyone's interest for this to happen, but it's never individually rational to commit that much to making it happen. Which is why it's a traditional role for governments and for philanthropy to provide those sorts of public goods.

Lucas Perry: So that last point here then on the motivations for why we need a Windfall Clause, would be general norm setting. So what do you have to say about general norm setting?

Cullen O'Keefe: This one is definitely a little more vague than some of the others. But if you think about what type of organization you would like to see develop AGI, it seems like one that has some legal commitment to sharing those benefits broadly is probably correlated with good outcomes. And in that sense, it's useful to be able to distinguish organizations that are credibly committed to that sort of benefit from ones that say they want that sort of broad benefit but are not necessarily committed to making it happen. And so in the Windfall Clause report, we are basically trying to say it's very important to take norms about the development of AI seriously. One of the norms that we're trying to develop is the common good principle. And even better is when you can develop those norms through high-cost or high-signal-value mechanisms. And if we're right that a Windfall Clause can be made binding, then the Windfall Clause is exactly one of them. It's a pretty credible way for an AI developer to demonstrate their commitment to the common good principle and also show that they're worthy of taking on this huge task of developing AGI.

The Windfall Clause makes performance of, or adherence to, the common good principle a testable hypothesis. It sets a kind of baseline against which commitments to the common good principle can be measured.

Lucas Perry: Now there are also here in your paper, firm motivations. So, incentives for adopting a Windfall Clause from the perspective of AI labs or AI companies, or private institutions which may develop AGI or transformative AI. And your three points here for firm motivations are that it can generate general goodwill. It can improve employee relations and it could reduce political risk. Could you hit on each of these here for why firms might be willing to adopt the Windfall Clause?

Cullen O'Keefe: Yeah. So just as a general note, we do see private corporations giving money to charity and doing other pro-social actions that are beyond their legal obligations, so nothing here is particularly new. Instead, it's just applying traditional explanations for why companies engage in, what's sometimes called corporate social responsibility or CSR. And see whether that's a plausible explanation for why they might be amenable to a Windfall Clause. The first one that we mentioned in the report, is just generating general goodwill, and I think it's plausible that companies will want to sign a Windfall Clause because it brings some sort of reputational benefit with consumers or other intermediary businesses.

The second one we talk about is managing employee relationships. In general, we see that tech employees have had a lot of power to shape the behavior of their employers. Fellow FLI podcast guest Haydn Belfield just wrote a great paper about this in AI specifically. Tech talent is in very high demand, and therefore they have a lot of bargaining power over what their firms do, and I think it's potentially very promising for tech employees to lobby for commitments like the Windfall Clause.

The third is what's termed, in a lot of legal and investment circles, political risk. That's basically the risk of governments or activists doing things that hurt you, such as tighter regulation, expropriation, taxation, things like that. And corporate social responsibility, including philanthropy, is just a very common way for firms to manage that, and could be the case for AI firms as well.

Lucas Perry: How strong do you think these motivations listed here are, and what do you think will be the main things that drive firms or institutions or organizations to adopt the Windfall Clause?

Cullen O'Keefe: I think it varies from firm to firm. I think a big one that's not listed here is how much management likes the idea of a Windfall Clause. Obviously, they're the ones ultimately making the decisions, so that makes sense. I think employee buy-in and enthusiasm about the Windfall Clause or similar ideas will ultimately be a pretty big determinant of whether this actually gets implemented. That's why I would love to hear and see engagement around this topic from people in the technology industry.

Lucas Perry: Something that we haven't talked about yet is the distribution mechanism. And in your paper, you come up with desiderata and important considerations for an effective and successful distribution mechanism: philanthropic effectiveness, security from improper influences, political legitimacy, and buy-in from AI labs. So, these are just guiding principles for helping to develop the mechanism for distribution. Could you comment on what the mechanism for distribution is or could be, and how these desiderata will guide the formation of that mechanism?

Cullen O'Keefe: A lot of this thinking is guided by a few different things. One is just involvement in the effective altruism community. I, as a member of that community, spend a lot of time thinking about how to make philanthropy work well. That said, I think that the potential scale of the Windfall Clause requires thinking about factors other than effectiveness, in the way that effective altruists think of it, just because the scale of potential resources that you're dealing with here begins to look less and less like traditional philanthropy and more and more like a pseudo- or para-governmental institution. And so that's why I think things like accountability and legitimacy become extra important in the Windfall Clause context. And then firm buy-in I mentioned just because part of the actual process of negotiating an eventual Windfall Clause would presumably be coming up with a distribution mechanism that advances some of the firm's objectives of getting positive publicity or goodwill from agreeing to the Windfall Clause, both with their consumers and also with employees and governments.

And so they're key stakeholders in coming up with that process as well. This all happens against the backdrop of a lot of popular discussion about the role of philanthropy in society, such as recent criticism of mega-philanthropy. I take those criticisms pretty seriously and want to come up with a Windfall Clause distribution mechanism that manages those better than current philanthropy. It's a big task in itself and one that needs to be taken pretty seriously.

Lucas Perry: Is the windfall function synonymous with the windfall distribution mechanism?

Cullen O'Keefe: No. The windfall function is the mathematical function that determines how much money signatories to the Windfall Clause are obligated to give.

Lucas Perry: So, the windfall function will be part of the windfall contract, and the windfall distribution mechanism is the vehicle or means or the institution by which that output of the function is distributed?

Cullen O'Keefe: Yeah. That's exactly right. Again, I like to think of this as top of the funnel, bottom of the funnel. So the windfall function is kind of the top of the funnel. It defines how much money has to go into the Windfall Clause system, and then the bottom of the funnel is the output: what actually gets done with the windfall to advance the goals of the Windfall Clause.

Lucas Perry: Okay. And so here you have some desiderata for this function, in particular transparency, scale sensitivity, adequacy, pre-windfall commitment, incentive alignment and competitiveness. Are there any here that you want to comment on with regard to the windfall function?

Cullen O'Keefe: Sure. If you look at the windfall function, it looks kind of like a progressive tax system. You fall into some bracket, and the bracket that you're in determines the marginal percentage of money that you owe. So, in a normal income tax scheme, the bracket is determined by your gross income. In the Windfall Clause scheme, the bracket is determined by a slightly modified thing, which is profits as a percentage of gross world product, which we started off talking about.

We went back and forth for a few different ways that this could look, but we ultimately decided upon a simpler windfall function that looks much like an income tax scheme, because we thought it was pretty transparent and easy to understand. And for a project as potentially important as the Windfall Clause, we thought that was pretty important that people be able to understand the contract that's being negotiated, not just the signatories.
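As a rough illustration of the bracket structure just described, here is a sketch of a progressive windfall function. The bracket boundaries and marginal rates below are hypothetical placeholders for illustration, not the figures from the report:

```python
# Sketch of a progressive windfall function, analogous to an income tax schedule.
# Brackets are expressed as fractions of gross world product (GWP); rates apply
# marginally, to the profits falling within each bracket. All numbers below are
# hypothetical placeholders, not the report's actual figures.
BRACKETS = [
    (0.001, 0.01),  # profits above 0.1% of GWP: 1% marginal rate
    (0.01, 0.20),   # profits above 1% of GWP: 20% marginal rate
    (0.10, 0.50),   # profits above 10% of GWP: 50% marginal rate
]

def windfall_obligation(profits: float, gwp: float) -> float:
    """Amount owed under the clause for a given level of annual profits."""
    owed = 0.0
    for i, (lower_frac, rate) in enumerate(BRACKETS):
        lower = lower_frac * gwp
        # The bracket's upper bound is the next bracket's lower bound, if any.
        upper = BRACKETS[i + 1][0] * gwp if i + 1 < len(BRACKETS) else float("inf")
        if profits > lower:
            owed += rate * (min(profits, upper) - lower)
    return owed

# Hypothetical example: $2 trillion in profits against an $85 trillion GWP.
print(windfall_obligation(2e12, 85e12))  # ~$237.65 billion
```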

Lucas Perry: Okay. And you're bringing up this point about taxes. One thing that someone might ask is, "Why do we need a whole Windfall Clause when we could just have some kind of tax on benefits accrued from AI?" But the very important feature to be mindful of here, about the Windfall Clause, is that it does something that taxing cannot do, which is redistribute funding from tech-heavy first world countries to people around the world, rather than just to the government of the country able to tax them. So that also seems to be a very important consideration here for why the Windfall Clause is important, rather than just some new tax scheme.

Cullen O'Keefe: Yeah. Absolutely. And in talking to people about the Windfall Clause, this is one of the top concerns that comes up, so you're right to emphasize it. I agree that the potential for international distribution is one of the main reasons that I personally am more excited about the Windfall Clause than standard corporate taxation. Other reasons are just that it seems more tractable to negotiate this individually with firms. The number of firms potentially in a position to develop advanced AI is pretty small now and might continue to be small for the foreseeable future. So the number of potential entities that you have to persuade to agree to this might be pretty small.

There's also the possibility, which we mention but don't propose an exact mechanism for in the paper, of allowing taxation to supersede the Windfall Clause. So, if a government came up with a better taxation scheme, you might either release the signatories from the Windfall Clause or just have the windfall function compensate for that by reducing or eliminating the total obligation. Of course, it gets tricky, because then you would have to decide which types of taxes you would do that for, if you want to maintain the international motivations of the Windfall Clause. And you would also have to figure out what the optimal tax rate is, which is obviously no small task. So those are definitely complicated questions, but at least in theory, there's the possibility of accommodating those sorts of ex-post taxation efforts in a way that doesn't burden firms too much.

Lucas Perry: Do you have any more insights, or positives and negatives, to comment on here about the windfall function? In the paper it is, as you mention, open to a lot more research. Do you have directions for further investigation of the windfall function?

Cullen O'Keefe: Yeah. It's one of the things that we lead out with, and it's actually as you're saying: this is primarily supposed to be illustrative, and not the right windfall function. I'd be very surprised if this was ultimately the right way to do this, just because the space of possibilities here is so big and we've explored so little of it. One of the ideas that I am particularly excited about, and that I think more and more might ultimately be the right thing to do, is instead of having a profits-based trigger for the windfall function, having a market-cap-based trigger. And there are just basic accounting reasons why I'm more excited about this. Tracking profits is not as straightforward as it seems, because firms can do stuff with their money. They can spend more of it and reallocate it in certain ways. Whereas it's much harder, and they have less incentive, to manipulate their stock price or market capitalization downward. So I'd be interested in potentially coming up with more value-based approaches to the windfall function, rather than our current one, which is based on profits.

That said, there are a ton of other variables that you could tweak here, and I would be very excited to work with people or see other proposals of what this could look like.

Lucas Perry: All right. So this is an open question about how the windfall function will exactly look. Can you provide any more clarity on the mechanism for distribution, keeping in mind here the difficulty of creating an effective way of distributing the windfall, which you list as the issues of effectiveness, accountability, legitimacy and firm buy-in?

Cullen O'Keefe: One concrete idea that I actually worked closely with FLI on, specifically with Anthony Aguirre and Jared Brown, was the windfall trust idea, which is basically to create a trust, or kind of pseudo-trust, that makes every person in the world, or as many people as we can reach, equal beneficiaries of a trust. So, in this structure, which is on page 41 of the report if people are interested in seeing it, it's pretty simple. The idea is that the successful developer would satisfy their obligations by paying money to a body called the Windfall Trust. For people who don't know what a trust is, it's a specific type of legal entity. And then all individuals would be either actual or potential beneficiaries of the Windfall Trust, and would receive equal funding flows from that. And they could even receive equal input into how the trust is managed, depending on how the trust was set up.

Trusts are also exciting because they are very flexible mechanisms whose governance you can arrange in many different ways. And then to make this more manageable, since obviously a single trust with eight billion beneficiaries seems hard to manage, you could instead have a single trust for every 100,000 people, or whatever number you think is manageable. I'm kind of excited about that idea. I think it hits a lot of the desiderata pretty well and could be a way in which a lot of people could see benefit from the windfall.
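To make the sharding idea concrete, here is a minimal sketch. The 100,000-per-trust figure comes from the conversation above; the function names and the equal-payout rule as coded are illustrative assumptions:

```python
# Sketch of splitting beneficiaries into manageable trusts, as described above.
# The 100,000-per-trust figure comes from the conversation; the rest is invented.
from typing import Dict, List

SHARD_SIZE = 100_000

def assign_to_trusts(beneficiary_ids: List[int]) -> Dict[int, List[int]]:
    """Partition beneficiaries into equal-sized trusts."""
    trusts: Dict[int, List[int]] = {}
    for i, beneficiary in enumerate(beneficiary_ids):
        trusts.setdefault(i // SHARD_SIZE, []).append(beneficiary)
    return trusts

def equal_payout(total_windfall: float, num_beneficiaries: int) -> float:
    """Each beneficiary receives an equal share of the distributed windfall."""
    return total_windfall / num_beneficiaries
```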

Lucas Perry: Are there any ways of creating proto-windfall clauses or proto-windfall trusts to sort of test the idea before transformative AI comes on the scene?

Cullen O'Keefe: I would be very excited to do that. I guess one thing I should say: OpenAI, where I currently work, has what's called a capped-profit structure, which is similar in many ways to the Windfall Clause. Our structure is such that profits above a certain cap that can be returned to investors go to a non-profit, which is the OpenAI non-profit, which then has to use those funds for charitable purposes. But I would be very excited to see new companies, and potentially companies aligned with the mission of the FLI podcast, experiment with structures like this. In the fourth section of the report, we talk all about different precedents that exist already, and some of these have different features that are close to the Windfall Clause. And I'd be interested in someone putting all those together for their start-up or their company and making a kind of pseudo-windfall clause.

Lucas Perry: Let's get into the legal permissibility of the Windfall Clause. Now, you said that this is actually one of the reasons why you first got into this: the idea got tabled because people were worried about the fiduciary responsibilities that companies would have. Let's start by reflecting on whether or not this is legally permissible in America, and then think about China, because these are the two biggest AI players today.

Cullen O'Keefe: Yeah. There's actually a slight wrinkle there: we might also have to talk about the Cayman Islands. But we'll get to that. I guess one interesting fact about the Windfall Clause report is that it's slightly weird that I'm the person that ended up writing this. You might think an economist should be the person writing this, since it deals so much with labor economics and inequality, etcetera, etcetera. And I'm not an economist by any means. The reason that I got swept up in this is because of the legal piece. So I'll first give a quick crash course in corporate law, because I think it's an area that not a lot of people understand, and it's also important for this.

Corporations are legal entities. They are managed by a board of directors for the benefit of the shareholders, who are the owners of the firm. And accordingly, since the directors have the responsibility of managing a thing which is owned in part by other people, they owe certain duties to the shareholders. These are known as fiduciary duties. The two primary ones are the duty of loyalty and the duty of care. The duty of loyalty, which we don't really talk about a ton in this piece, is just the duty to manage the corporation for the benefit of the corporation itself, and not for the personal gain of the directors.

The duty of care is kind of what it sounds like: just the duty to take adequate care that the decisions made for the corporation by the board of directors will benefit the corporation. The reason that this is important for the purposes of a Windfall Clause, and also for the endless speculation of corporate law professors and theorists, is that when you engage in corporate philanthropy, it kind of looks like you're doing something that is not for the benefit of the corporation. By definition, giving money to charity is primarily a philanthropic act, or at least that's kind of the prima facie case for why that might be a problem from the standpoint of corporate law. Because this is other people's money, largely, and the corporation is giving it away, seemingly not for the benefit of the corporation itself.

There actually hasn't been that much case law, that is, actual court decisions, on this issue. I found some of them across the US. As a side note, we primarily talk about Delaware law, because Delaware is the state in which the plurality of American corporations are incorporated, for historical reasons. Their corporate law is by far the most influential in the United States. So, even though you have this potential duty of care issue with making corporate donations, the standard by which directors are judged is the business judgment rule. Quoting from the American Law Institute, a summary of the business judgment rule is: "A director or officer who makes a business judgment in good faith fulfills the duty of care if the director or officer, one, is not interested," meaning there is no conflict of interest, "in the subject of the business judgment. Two, is informed with respect to the business judgment to the extent that the director or officer reasonably believes to be appropriate under the circumstances. And three, rationally believes that the business judgment is in the best interests of the corporation." So this is actually a pretty forgiving standard. It's basically just a "use your best judgment" standard, which is why it's very hard for shareholders to successfully make a case that a judgment was a violation of the business judgment rule. It's very rare for such challenges to actually succeed.

So a number of cases have examined the relationship of the business judgment rule to corporate philanthropy. They basically universally held that this is a permissible invocation, or permissible example, of the business judgment rule: that there are all these potential benefits that philanthropy could give to the corporation, and therefore corporate directors' decisions to authorize corporate donations would generally be upheld under the business judgment rule, provided all these other conditions are met.

Lucas Perry: So these firm motivations that we touched on earlier were generating goodwill towards the company, improving employee relations, and then reducing political risk, which I guess is also like having good faith with politicians who are, at the end of the day, hopefully being held accountable by their constituencies.

Cullen O'Keefe: Yeah, exactly. So these are all things that could plausibly, financially benefit the corporation in some form. So in this sense, corporate philanthropy looks less like a donation and more like an investment in the firm's long-term profitability, given all these soft factors like political support and employee relations. Another interesting wrinkle to this: if you read the case law of these corporate donation cases, they're actually quite funny. The one case I'll quote from would be Sullivan v. Hammer. A corporate director wanted to make a corporate donation to an art museum that had his name and basically served as his personal art collection, more or less. And the court said this is still okay under the business judgment rule. So, that was a pretty shocking example of how lenient this standard is.

Lucas Perry: So then the synopsis version here is that the Windfall Clause is permissible in the United States because philanthropy in the past has been seen as still being in line with fiduciary duties, and the Windfall Clause would do the same.

Cullen O'Keefe: Yeah, exactly. The one interesting wrinkle about the Windfall Clause that might distinguish it from most corporate philanthropy, though definitely not all, is that it has this potentially very high ex-post cost, even though its ex-ante cost might be quite low. So in a situation in which a firm actually has to pay out the Windfall Clause, it's very, very costly to the firm. But the business judgment rule is actually supposed to protect these exact types of decisions, because the thing that courts don't want to do is be second-guessing every single corporate decision with the benefit of hindsight. So instead, they just instruct people to look at the ex-ante cost-benefit analysis, and defer to that, even if ex-post it turns out to have been a bad decision.

There's an analogy that we draw to stock option compensation, which is very popular, where you give an employee a block of stock options that at the time is not very valuable, because it's probably just in line with the current value of the stock, but ex-post might be hugely valuable. And this is how a lot of early employees of companies get wildly rich, well beyond what they would have earned at fair market cash value ex-ante. That sort of ex-ante reasoning is really the important thing, not the fact that it could be worth a lot ex-post.

One of the interesting things about the Windfall Clause is that it is a contract through time, and potentially over a long time. A lot of contracts that we make are pretty short-term focused. But the Windfall Clause is an agreement now to do stuff if stuff happens in the future, potentially in the distant future, which is part of the way the windfall function is designed. It's designed to be relevant over a long period of time, especially given the uncertainty that we started off talking about with AI timelines. The important thing that we talked about was the ex-ante cost, which means the cost to the firm in expected value right now. Which is basically the probability that this ever gets triggered, and if it does get triggered, how much it will be worth, all discounted by the time value of money, etcetera.
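As a toy illustration of that ex-ante reasoning (all numbers invented for the example): the expected cost of the clause to a firm today is roughly the probability of ever triggering it, times the amount owed if triggered, discounted back to the present.

```python
# Toy ex-ante cost calculation for the distinction discussed above.
# All inputs are invented for illustration.
def ex_ante_cost(p_trigger: float, amount_owed: float,
                 years_until_trigger: float, discount_rate: float) -> float:
    """Expected present value of the clause's cost to a signatory."""
    discount_factor = (1 + discount_rate) ** -years_until_trigger
    return p_trigger * amount_owed * discount_factor

# E.g., a 1% chance of owing $500B in 40 years, discounted at 5% per year:
print(ex_ante_cost(0.01, 500e9, 40, 0.05))  # ~ $710 million
```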

One thing that I didn't talk about is that there's some language in some court cases about limiting the amount of permissible corporate philanthropy to a reasonable amount, which is obviously not a very helpful guide. But there's a court case saying that this should be determined by looking to the charitable giving deduction, which I believe is about 10% right now.

Lucas Perry: So sorry, just to get the language correct: it's that the ex-post cost is very high, because after the fact you have to pay huge percentages of your profit?

Cullen O'Keefe: Yeah.

Lucas Perry: But it still remains feasible that a court might say that this violates fiduciary responsibilities, right?

Cullen O'Keefe: There's always the possibility that a Delaware court would invent or apply new doctrine in application to this thing that looks kind of weird from their perspective. I mean, this is a general question of how binding precedent is, which is an endless topic of conversation for lawyers. But if they were doing what I think they should do and just straight up applying precedent, I don't see a particular reason why this would be decided differently than any of the other corporate philanthropy cases.

Lucas Perry: Okay. So, let's talk a little bit now about the Cayman Islands and China.

Cullen O'Keefe: Yeah. So a number of significant Chinese tech companies are actually incorporated in the Cayman Islands. It's not exactly clear to me why this is the case, but it is.

Lucas Perry: Isn't it for hiding money off-shore?

Cullen O'Keefe: So I'm not sure if that's why. I think even if taxation is a part of that, I think it also has to do with capital restrictions in China, and also they want to attract foreign investors which is hard if they're incorporated in China. Investors might not trust Chinese corporate law very much. This is just my speculation right now, I don't actually know the answer to that.

Lucas Perry: I guess the question then just is, what is the US and China relationship with the Cayman Islands? What is it used for? And then is the Windfall Clause permissible in China?

Cullen O'Keefe: Right. So, the Cayman Islands is where the big three Chinese tech firms, Alibaba, Baidu and Tencent, are incorporated. I'm not a Caymanian lawyer by any means, nor am I an expert in Chinese law, but basically from my outsider reading of this law, applying my general legal knowledge, it appears that similar principles of corporate law apply in the Cayman Islands, which is why it might be a popular spot for incorporation. They have a rule that looks like the business judgment rule. This is in footnote 120 if anyone wants to dig into it in the report. So, for the Caymanian corporations, it looks like it should be okay for the same reason. China, being a self-proclaimed socialist country, also has a pretty interesting corporate law that actually not only allows but appears to encourage firms to engage in corporate philanthropy. From the perspective of their law, at least, it looks potentially more friendly than even Delaware law, so, kind of a fortiori, it should be permissible there.

That said, obviously there's potential political reality to be considered there, especially also the influence of the Chinese government on state owned enterprises, so I don't want to be naïve as to just thinking what the law says is what is actually politically feasible there. But all that caveating aside, as far as the law goes, the People's Republic of China looks potentially promising for a Windfall Clause.

Lucas Perry: And that again matters, because China is currently second to the US in AI, and is thus also potentially able to reach windfall via transformative AI in the future.

Cullen O'Keefe: Yeah. I think that's the general consensus, that after the United States, China seems to be the most likely place to develop AGI or transformative AI. You can listen to and read a lot of the work by my colleague Jeff Ding on this, who recently appeared on the 80,000 Hours podcast talking about China's AI dream, and has a report by the same name from FHI that I would highly encourage everyone to read.

Lucas Perry: All right. Is it useful here to talk about historical precedents?

Cullen O'Keefe: Sure. I think one that's potentially interesting is that a lot of sovereign nations have actually dealt with this problem of windfall governance before, mostly natural-resource-rich states. So Norway is kind of the leading example of this. They had a ton of wealth from oil, and had to come up with a way of distributing that wealth in a fair way. And they have a sovereign wealth fund as a result, as do a lot of countries, and it provides for all sorts of socially beneficial applications.

Google actually, when it IPO'd, gave one percent of its equity to its non-profit arm, the Google Foundation. So that's actually significantly like the Windfall Clause, in the sense that it gave a commitment that would grow in value as the firm's prospects improved, and therefore had low ex-ante costs but potentially high ex-post costs. Obviously, in personal philanthropy, a lot of people will be familiar with pledges like the Founders Pledge or the Giving What We Can pledge, where people pledge a percentage of their personal income to charity. The Founders Pledge most resembles the Windfall Clause in this respect: people pledge a percentage of equity from their company upon exit or upon liquidity events, and in that sense, it looks a lot like a Windfall Clause.

Lucas Perry: All right. So let's get in to objections, alternatives and limitations here. First objection to the Windfall Clause, would be that the Windfall Clause will never be triggered.

Cullen O'Keefe: That certainly might be true, and there are a lot of reasons why it might be true. One is that we could all just be very wrong about the promise of AI. Also, AI development could unfold in some other way. It could be a non-profit or an academic institution or a government that develops windfall-generating AI, and no one else does. Or it could just be that the windfall from AI is spread out sufficiently over a large number of firms, such that no one firm earns windfall, but collectively the tech industry does, or something. So, that's all certainly possible. I think that those are all scenarios worth investing in addressing. You could potentially modify the Windfall Clause to address some of those scenarios.

That said, I think there's a significant, non-trivial possibility that such a windfall occurs in a way that would trigger a Windfall Clause, and if it does, it seems worth investing in solutions that could mitigate any potential downside to that or share the benefits equally. Part of the benefit of the Windfall Clause is that if nothing happens, it doesn't impose any obligations. So, it's quite low cost in that sense. From a philanthropic perspective, there's a cost in setting this up and promoting the idea, etcetera, and those are definitely non-trivial costs. But the actual cost of signing the clause only manifests upon actually triggering it.

Lucas Perry: This next one is that firms will find a way to circumvent their commitments under the clause. So it could never trigger, because they could just keep moving money around in skillful ways such that the clause never ends up getting triggered. Some sub-points here are: that firms will evade the clause by nominally assigning profits to subsidiary, parent or sibling corporations; that firms will evade the clause by paying out profits in dividends; and that firms will sell all windfall-generating AI assets to a firm that is not bound by the clause. Any thoughts on these here?

Cullen O'Keefe: First of all, a lot of these were raised by early commentators on the idea, and so I'm very thankful to those people for raising them. I think we probably haven't exhausted the list of potential ways in which firms could evade their commitments, so in general I would want to come up with solutions that are not just patchwork solutions, but more like general incentive-alignment solutions. That said, I think most of these problems are mitigable by careful contractual drafting, and then potentially also by switching to other forms of the Windfall Clause, like something based on firm share price. But still, I think there are probably a lot of ways to circumvent the clause in the kind of early form that we've proposed, and we would want to make sure that we're pretty careful about drafting it and simulating potential ways that a signatory could try to wriggle out of its commitment.

I think it's also worth noting that a lot of those potential actions would be pretty clear violations of general legal obligations that signatories to a contract have, or could be mitigated with pretty easy contractual clauses.

Lucas Perry: Right. The solution to these would be foreseeing them and beefing up the actual windfall contract to not allow for these methods of circumvention.

Cullen O'Keefe: Yeah.

Lucas Perry: So now this next one I think is quite interesting. No firm with a realistic chance of developing windfall generating AI would sign the clause. How would you respond to that?

Cullen O'Keefe: I mean, I think that's certainly a possibility, and if that's the case, then that's the case. It seems like our ability to change that might be pretty limited. I would hope that most firms in the potential position to be generating windfall, would take that opportunity as also carrying with it responsibility to follow the common good principle. And I think that a lot of people in those companies, both in leadership and the rank and file employee positions, do take that seriously. We do also think that the Windfall Clause could bring non-trivial benefits as we spent a lot of time talking about.

Lucas Perry: All right. The next one here is, quote, "If the public benefits of the Windfall Clause are supposed to be large, that is inconsistent with stating that the cost to firms will be small enough that they would be willing to sign the clause." This has a lot to do with the distinction between ex-ante and ex-post costs, and also how there are probabilities and time involved here. So, your response to this objection?

Cullen O'Keefe: I think there are some asymmetries between the costs and benefits. Some of the costs are things that would happen in the future. So from a firm's perspective, they should probably discount the costs of the Windfall Clause, because if they earn windfall, it would be in the future. From a public policy perspective, a lot of those benefits might not be as time sensitive. So you might not super-care when exactly those costs happen, and therefore not really discount them from a present value standpoint.

Lucas Perry: You also probably wouldn't want to live in the world in which there was no distribution mechanism or windfall function for allocating the windfall profits from one of your competitors.

Cullen O'Keefe: That's an interesting question though, because a lot of corporate law principles suggest that firms should want to behave in a risk-neutral sense and then allow investors to spread their bets according to their own risk tolerances. So, I'm not sure that this risk-spreading-between-firms argument works that well.

Lucas Perry: I see. Okay. The next is that the Windfall Clause reduces incentives to innovate.

Cullen O'Keefe: So, I think it's definitely true that it will probably have some effect on the incentive to innovate. That almost seems necessary, or something like that. That said, I think people in our community are of the opinion that there are significant externalities to innovation, and not all innovation towards AGI is strictly beneficial in that sense. So, making sure that those externalities are balanced seems important, and the Windfall Clause is one way to do that. In general, I think that the disincentive is probably just outweighed by the benefits of the Windfall Clause, but I would be open to reanalysis of that exact calculus.

Lucas Perry: Next objection is, the Windfall Clause will shift investment to competitive non-signatory firms.

Cullen O'Keefe: This was another particularly interesting comment, and it actually has a potential perverse effect. Suppose you have two types of firms: nice firms and less nice firms. And all the nice firms sign the Windfall Clause, and therefore their future profit streams are taxed more heavily than the bad firms'. And this is bad, because now investors will probably want to go to bad firms, because they offer a potentially more attractive return on investment. Like the previous objection, this is probably true to some extent. It kind of depends on the empirical case about how many firms you think are good and bad, and also on the exact calculus regarding how much this disincentivizes investors from giving to good firms and how much it causes the good firms to act better.

We do talk a little bit about different ways in which you could potentially mitigate this with careful mechanism design. So you could have the Windfall Clause consist of subordinated obligations, such that the firm could raise equity or debt senior to the Windfall Clause, and new investors would not be disadvantaged by investing in a firm that has signed it. Those are kind of complicated mechanisms, and again, this is another point where thinking through this from a very careful microeconomic standpoint and modeling this type of development dynamic would be very valuable.
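One way to picture the subordination idea is as a payout waterfall. The sketch below is a toy model under assumed terms, not the report's mechanism: the function name, the flat windfall_share rate, and the dollar figures are all hypothetical, and the actual proposal uses a progressive windfall function rather than a single rate.

```python
def payout_waterfall(profits, senior_claims, windfall_share, windfall_threshold):
    """Toy model of a subordinated Windfall Clause obligation.

    Claims senior to the clause (e.g., debt or senior equity raised after
    signing) are paid in full first, so those investors are unaffected by
    the clause. The clause then applies only to profits above a windfall
    threshold, and whatever remains flows to common shareholders.
    """
    paid_senior = min(profits, senior_claims)
    remaining = profits - paid_senior
    # The clause binds only on profits above the windfall threshold.
    excess = max(remaining - windfall_threshold, 0.0)
    windfall_obligation = windfall_share * excess
    to_shareholders = remaining - windfall_obligation
    return paid_senior, windfall_obligation, to_shareholders


# Example: $1.2T in profits, $100B of senior claims, a 50% rate on
# profits above a $500B threshold (all figures hypothetical).
print(payout_waterfall(1.2e12, 1.0e11, 0.5, 5.0e11))
# -> (100000000000.0, 300000000000.0, 800000000000.0)
```

The design point is simply that the clause can be drafted to sit at the bottom of the capital structure, so capital raised after signing is never impaired by the firm's earlier commitment.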

Lucas Perry: All right. So we're starting to get to the end here of objections or at least objections in the paper. The next is, the Windfall Clause draws attention to signatories in an undesirable way.

Cullen O'Keefe: I think the motivation for this objection is something like, imagine that tomorrow Boeing came out and said, "If we build a Death Star, we'll only use it for good." What are you talking about, building a Death Star? Why do you even have to talk about this? I think that's the motivation: talking about earning windfall is itself drawing attention to the firm in potentially undesirable ways. So, that could potentially be the case. I guess the fact that we're having this conversation suggests that this is not a super-taboo subject. I think a lot of people are generally aware of the promise of artificial intelligence, so the idea that the gains could be huge and concentrated in one firm doesn't seem that worrying to me. Also, if a firm were super close to AGI, it would actually be much harder for it to sign on to the Windfall Clause, because the costs would be so great in expectation that it probably couldn't justify signing from a fiduciary duty standpoint.

So in that sense, signing on to the Windfall Clause, at least from a purely rational standpoint, is kind of negative evidence that a firm is close to AGI. That said, there are certainly psychological elements that complicate that. It's very cheap for me to just make a commitment that says, oh sure, if I get a trillion dollars, I'll give 75% of it to some charity. Sure, why not? I'll make that commitment right now in fact.

Lucas Perry: It's kind of more efficacious if we get firms to adopt this sooner rather than later, because as time goes on, their credences about who will hit AI windfall will increase.

Cullen O'Keefe: Yeah. That's exactly right. Assuming timelines are constant, the clock is ticking on stuff like this. Every year that goes by, committing to this gets more expensive to firms, and therefore rationally, less likely.
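The same back-of-envelope formula from above makes the ticking clock concrete (again with purely illustrative numbers). Each passing year tends to raise the firm's credence p that it will earn windfall and shrink the discount horizon t, and both changes inflate the expected present cost of signing:

```latex
\frac{0.10 \times 0.5 \times \$1\,\text{trillion}}{(1.1)^{15}} \approx \$12\ \text{billion}
\qquad\text{vs.}\qquad
\frac{0.01 \times 0.5 \times \$1\,\text{trillion}}{(1.1)^{30}} \approx \$0.29\ \text{billion}
```

Moving from a 1% credence thirty years out to a 10% credence fifteen years out multiplies the rational cost of signing by roughly forty.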

Lucas Perry: All right. I'm not sure that I understand this next one, but here it is: the Windfall Clause will lead to moral licensing. What does that mean?

Cullen O'Keefe: So moral licensing is a psychological concept: if you do certain actions that either are good or appear to be good, you're more likely to do bad things later. So you have a license to act immorally because of the times that you acted morally. This is a common objection to corporate philanthropy; people call this ethics washing, or greenwashing in the context of environmental issues specifically. I think you should, again, do a pretty careful cost-benefit analysis here to see whether the Windfall Clause is actually worth the potential licensing effect that it has. But of course, one could raise this objection to pretty much any pro-social act. Given that we think the Windfall Clause could actually have legally enforceable teeth, the worry seems less pressing, unless you think that the licensing effects would be so great that they would overcome the benefits of actually having an enforceable Windfall Clause. That seems kind of intuitively implausible to me.

Lucas Perry: Here's another interesting one. The rule of law might not hold if windfall profits are achieved. Human greed and power really kick in, and the power structures which are meant to enforce the rule of law are no longer able to do so against someone with AGI or superintelligence. How do you feel about this objection?

Cullen O'Keefe: I think it's a very serious one. I think it's something that perhaps the AI safety community should be investing more in. I'm also having an interesting asynchronous discussion on this with Rohin Shah on the EA Forum. I do think there's a significant chance that an actor as powerful as a corporation with AGI, with all the benefits that come with that at its disposal, would be very hard to enforce the Windfall Clause against. That said, I think we do see Davids beating Goliaths in the law. People do win lawsuits against the United States government or very large corporations. So it's certainly not the case that size is everything, though it would be naïve to suppose that it's not correlated with the probability of winning.

Another thing to worry about is the fact that this corporation will have very powerful AI that could potentially influence the outcome of cases in some way, or perhaps hide ways in which it was evading the Windfall Clause. So, I think that's worth taking seriously. And just in general, I think this issue is worth a lot of investment from the AI safety and AI policy communities, for reasons well beyond the Windfall Clause. It seems like a problem that we'll have to figure out how to address.

Lucas Perry: Yeah. That makes sense. You brought up the rule of law not holding because of an AGI developer's power to win court cases. But the kind of power that AGI would give would also potentially extend far beyond just winning court cases, right? To the ability to not be bound by the law at all.

Cullen O'Keefe: Yeah. You could just act as a thug and be beyond the law, for sure.

Lucas Perry: It definitely seems like a neglected point, in terms of trying to have a good future with beneficial AI.

Cullen O'Keefe: I'm of the opinion that this is pretty important. It just seems like this is also a thing, in general, that you're going to want in a post-AGI world. You want the actor with AGI to be accountable to something other than its own will.

Lucas Perry: Yeah.

Cullen O'Keefe: You want agreements you make before AGI to still have meaning post-AGI and not just depend on the beneficence of the person with AGI.

Lucas Perry: All right. So the last objection here is, the Windfall Clause undesirably leaves control of advanced AI in private hands.

Cullen O'Keefe: I'm somewhat sympathetic to the argument that AGI is just such an important technology that it ought to be governed in a pro-social way. Basically, this project doesn't have a good solution to that, other than to the extent that you could use Windfall Clause funds to perhaps purchase shares of stock from the company, or have the commitment be in shares of stock rather than in money. On the other hand, private companies are doing a lot of very important work right now in developing AI technologies, and they are the current leading developers of advanced AI. It seems to me like they're behaving pretty responsibly overall. I'm just not sure what the ultimate ideal arrangement of ownership of AI will look like, and I want to leave that open for other discussion.

Lucas Perry: All right. So we've hit on all of these objections. Surely there are more, but this gives a lot for listeners and others to consider and think about. So in terms of alternatives to the Windfall Clause, you list four things here: windfall profits should just be taxed; we should rely on anti-trust enforcement instead; we should establish a sovereign wealth fund for AI; and we should implement a universal basic income instead. So could you just go through each of these sequentially and give us some thoughts and analysis on your end?

Cullen O'Keefe: Yeah. We talked about taxes already, so is it okay if I just skip that?

Lucas Perry: Yeah. I'm happy to skip taxes. The point there being that they will end up only serving the country in which they are being taxed, unless that country has some other mechanism for distributing certain kinds of taxes to the world.

Cullen O'Keefe: Yeah. And it also just seems much more tractable right now to work on private commitments like the Windfall Clause rather than lobbying for a pretty robust tax code.

Lucas Perry: Sure. Okay, so number two.

Cullen O'Keefe: So number two is about anti-trust enforcement. This was largely spurred by a conversation with Haydn Belfield. The idea here is that in this world, the AI developer will probably be a monopoly, or at least extremely powerful in its market, and therefore we should consider anti-trust enforcement against it. I guess my points are two-fold. Number one, under American law it is pretty clear that merely possessing monopoly power is not itself a reason to take anti-trust action; you have to have acquired that monopoly power in some illegal way. And if some of the stronger hypotheses about AI are right, AI could be a natural monopoly, so it seems pretty plausible that an AI monopoly could develop without any illegal actions taken to gain it.

I guess second, the Windfall Clause addresses some of the harms from monopoly, though not all of them, by transferring some wealth from shareholders to everyone and therefore transferring some wealth from shareholders to consumers.

Lucas Perry: Okay. Could focusing on anti-trust enforcement alongside the Windfall Clause be beneficial?

Cullen O'Keefe: Yeah. It certainly could be. I don't want to suggest that we ought not to consider anti-trust, especially if there's a natural reason to break up firms or if there's an actual violation of anti-trust law going on. I guess I'm pretty sympathetic to the anti-trust orthodoxy that monopoly is not in itself a reason to break up a firm. But I certainly think that we should continue to think about anti-trust as a potential response to these situations.

Lucas Perry: All right. And number three is we should establish a sovereign wealth fund for AI.

Cullen O'Keefe: So this is an idea that actually came out of FLI. Anthony Aguirre has been thinking about this. The idea is to set up something that looks like the sovereign wealth funds that I alluded to earlier, which places like Norway and other resource-rich countries have. Some better and some worse governed, I should say. And I think Anthony's suggestion was to set this up as a fund that held shares of stock of the corporation, and redistributed wealth in that way. I am sympathetic to this idea overall; as I mentioned, I think a stock-based Windfall Clause could potentially be an improvement over the cash-based one that we suggest. That said, I think there are significant legal problems here that kind of make this harder to imagine working. For one thing, it's hard to imagine the government buying up all these shares of stock in companies; just to acquire a significant enough portion of them that you'd have a good probability of capturing a decent percentage of future windfall, you would have to spend a ton of money.

Secondly, the government could instead try to expropriate the shares of stock, but that would require just compensation under the US Constitution. Third, there are ways that a corporation can prevent an outside actor from accumulating a huge share of its stock if it doesn't want it to; the poison pill is the classic example. So if firms didn't want a sovereign automation fund to buy up significant shares of their stock, which they might not, since such a fund might not govern in the best interests of other shareholders, they could just prevent it from acquiring a controlling stake. So all of those seem like pretty powerful reasons why contractual mechanisms might be preferable to that kind of sovereign automation fund.

Lucas Perry: All right. And the last one here is, we should implement a universal basic income instead.

Cullen O'Keefe: Saving one of the most popular suggestions for last. This isn't even really an alternative to the Windfall Clause; it's just one way that the Windfall Clause could look. And ultimately I think UBI is a really promising idea that's been pretty well studied. It seems to be pretty effective, it's obviously quite simple, and it has widespread appeal. And I would probably be pretty sympathetic to a Windfall Clause that ultimately implements a UBI. That said, I think there are some reasons you might prefer other forms of windfall distribution. One is just that UBI doesn't seem to target people particularly harmed by AI. For example, if we're worried about a future with a lot of automation of jobs, UBI might not be the best way to compensate the people who are harmed.

Others argue that it might not be the best vehicle for providing public goods, if you think that's something the Windfall Clause should do. But I think it could be a very promising part of the Windfall Clause distribution mechanism.

Lucas Perry: All right. That makes sense. And so wrapping up here, are there any last thoughts you'd like to share with anyone particularly interested in the Windfall Clause or people in policy in government who may be listening or anyone who might find themselves at a leading technology company or AI lab?

Cullen O'Keefe: Yeah. I would encourage them to get in touch with me if they'd like; my email address is listed in the report. I think just in general, this is going to be a major challenge for society in the next century. At least it could be. As I said, I think there's substantial uncertainty about a lot of this, so there are a lot of potential opportunities to do research, not just in economics and law, but also in political science, thinking about how we can govern the windfall that artificial intelligence brings in a way that's universally beneficial. So I hope that other people will be interested in exploring that question. I'll be working with the Partnership on AI to help think through this as well, and if you're interested in those efforts and have expertise to contribute, I would very much appreciate people getting in touch so they can get involved.

Lucas Perry: All right. Wonderful. Thank you and everyone else who helped work on this paper. It's very encouraging, and hopefully we'll see widespread adoption and maybe even implementation of the Windfall Clause in our lifetime.

Cullen O'Keefe: I hope so too, thank you so much Lucas.
