AI Should Provide a Shared Benefit for as Many People as Possible

Shared Benefit Principle: AI technologies should benefit and empower as many people as possible.

Today, the combined wealth of the eight richest people in the world is greater than that of the poorest half of the global population. That is, 8 people have more than the combined wealth of 3,600,000,000 others.

This is already an extreme example of income inequality, but if we don’t prepare properly for artificial intelligence, the situation could get worse. Beyond the obvious economic benefits that would accrue to whoever designs advanced AI first, those who profit from AI will also likely have access to better health care, happier and longer lives, more opportunities for their children, various forms of intelligence enhancement, and so on.

A Cultural Shift

Our approach to technology so far has been that whoever designs it first, wins — and they win big. In addition to the fabulous wealth an inventor can accrue, the creator of a new technology also assumes complete control over the product and its distribution. This means that an invention or algorithm will only benefit those whom the creator wants it to benefit. While this approach may have worked with previous inventions, many are concerned that advanced AI will be so powerful that we can’t treat it as business-as-usual.

What if we could ensure that as AI is developed we all benefit? Can we make a collective — and pre-emptive — decision to use AI to help raise up all people, rather than just a few?

Joshua Greene, a professor of psychology at Harvard, explains his take on this Principle: “We’re saying in advance, before we know who really has it, that this is not a private good. It will land in the hands of some private person, it will land in the hands of some private company, it will land in the hands of some nation first. But this principle is saying, ‘It’s not yours.’ That’s an important thing to say because the alternative is to say that potentially, the greatest power that humans ever develop belongs to whoever gets it first.”

AI researcher Susan Craw also agreed with the Principle, and she further clarified it.

“That’s definitely a yes,” Craw said, “But it is AI technologies plural, when it’s taken as a whole. Rather than saying that a particular technology should benefit lots of people, it’s that the different technologies should benefit and empower people.”

The Challenge of Implementation

However, as is the case with all of the Principles, agreeing with them is one thing; implementing them is another. John Havens, the Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, considered how the Shared Benefit Principle would ultimately need to be modified so that the new technologies will benefit both developed and developing countries alike.

“Yes, it’s great,” Havens said of the Principle, before adding, “if you can put a comma after it, and say … something like, ‘issues of wealth, GDP, notwithstanding.’ The point being, what this infers is whatever someone can afford, it should still benefit them.”

Patrick Lin, a philosophy professor at California Polytechnic State University, was even more concerned about how the Principle might be implemented, mentioning the potential for unintended consequences.

Lin explained: “Shared benefit is interesting, because again, this is a principle that implies consequentialism, that we should think about ethics as satisfying the preferences or benefiting as many people as possible. That approach to ethics isn’t always right. … Consequentialism often makes sense, so weighing these pros and cons makes sense, but that’s not the only way of thinking about ethics. Consequentialism could fail you in many cases. For instance, consequentialism might green-light torturing or severely harming a small group of people if it gives rise to a net increase in overall happiness to the greater community.”

“That’s why I worry about the … Shared Benefit Principle,” Lin continued. “[It] makes sense, but [it] implicitly adopts a consequentialist framework, which by the way is very natural for engineers and technologists to use, so they’re very numbers-oriented and tend to think of things in black and white and pros and cons, but ethics is often squishy. You deal with these squishy, abstract concepts like rights and duties and obligations, and it’s hard to reduce those into algorithms or numbers that could be weighed and traded off.”

As we move from discussing these Principles as ideals to implementing them as policy, concerns such as those that Lin just expressed will have to be addressed, keeping possible downsides of consequentialism and utilitarianism in mind.

The Big Picture

The devil will always be in the details. As we consider how we might shift cultural norms to prevent all benefits going only to the creators of new technologies — as well as considering the possible problems that could arise if we do so — it’s important to remember why the Shared Benefit Principle is so critical. Roman Yampolskiy, an AI researcher at the University of Louisville, sums this up:

“Early access to superior decision-making tools is likely to amplify existing economic and power inequalities, turning the rich into super-rich, permitting dictators to hold on to power and making oppositions’ efforts to change the system unlikely to succeed. Advanced artificial intelligence is likely to be helpful in medical research and genetic engineering in particular, making significant life extension possible, which would remove one of the most powerful drivers of change and redistribution of power – death. For this and many other reasons, it is important that AI tech should be beneficial and empowering to all of humanity, making all of us wealthier and healthier.”

What Do You Think?

How important is the Shared Benefit Principle to you? How can we ensure that the benefits of new AI technologies are spread globally, rather than remaining with only a handful of people who developed them? How can we ensure that we don’t inadvertently create more problems in an effort to share the benefits of AI?

This article is part of a series on the 23 Asilomar AI Principles. The Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the discussions about previous principles here.

10 replies
  1. Koen Buckinx says:

    Is there an age restriction? Wouldn’t the largest homogeneous group of people be children? If an advanced AI is beneficial to them, we’re probably on the right track.

  2. Timothy Rue says:

    As well intended as the 23 Asilomar AI principles are, given the right situation many of the signers will violate one or more of the principles. This is the problem with voluntarism; it is the same problem the UN has. Many can agree, but when it gets down to doing, enough of those who agreed won’t do.

    Has the discussion on AI ethics become an excuse to have a conference or some part of a conference, for the sake of having a conference or filling in a conference time allotment? If you took all the past and current AI ethics talks, written works, and discussions and lined them up end to end, how long would it take to get through all of it? And by the time you do get through it all, how much more has been generated?

    Perhaps the 23 Asilomar AI principles are the much simpler summary of all this ethics… DATA. What would the output be if all this data were fed through a machine learning process, just to see what the output is?

    Ultimately, the real question is not about some need to discuss AI ethics more, but how to make it happen, how to ensure it happens: just doing it, because if you don’t, everyone will know.

    There are many signers of the 23 Asilomar AI Principles, some quite notable. However, as a verification test of my claim that not all signers will follow what they have signed, here is a long-running, fundamental computer software industry ethics violation.

    The typical standard primary user interfaces are the command line and the graphical user interface, but there is a third; it has simply been heavily constrained away from typical users’ access, and without any valid ethical reason. We can call this third standard user interface (though it is non-existent today) the UAI, for User Automation Interface: nothing more than an easy-to-use, user-oriented IPC port, standardized across all applications, libraries, and devices. One computer system that had all three was taken off the market, and the users supporting it were given a soap opera of demise instead. That was the Commodore Amiga: the Amiga OS had what was called the ARexx port, and it was the IPC port that mattered, not the optional ARexx language.

    Why is this so important? Do you give a painter, an artist, only two of the primary colors to paint with? Where is the artist’s ability to paint a rainbow? Likewise, users should at least have been able to access all the same functionality they can reach through the command line and the GUI, but in a manner where they are able to mix the colors: to integrate functionality across applications, as they might occasionally or more often want to do in an automated manner (i.e., AWK was not intended to be used to create complex programs, but some users did so anyway). There is more to this direction of enabling users (a minimal sketch of what such an automation port could look like follows at the end of this comment), but this fundamental ethics violation of wrongfully constraining users’ ability to do for themselves is never going to provide a viable foundation on which to build AI ethics compliance assurance. The AI house will fail because its foundation is bad. Correcting it, “better late than never”, will change the foundation of AI ethics in a fundamental way that can then provide the assurance needed for compliance.

    I said there is more, so here it is (oops, links are not allowed): “Voice of Global Governance.” See section (D), Artificial Intelligence (A.I.) “Safety Use”, and access its references for more details regarding Abstraction Physics, but read the rest at least once (10 pages total), for it is an issue of ethics of inclusion. For what humans can do better than machines alone: integrate. See (oops, links are not allowed) “Tug of War of Income and Competition.” Email me for the two links.

    What the spirit and intent of the above fundamental ethics violation is: “You know how to survive? You make people need you. You survive because you make them need what you have, and then they have nowhere else to go.” — Young Bill Gates in Pirates of Silicon Valley.
    Fiction? The problem is, nobody but the end user wants to do the integrated automation one-offs, and had users been able to do them, AI would be far more acceptable, because more people would fundamentally understand it: understanding the stone image of the beast of human mental processes, and keeping it constrained with such shared understanding. But today illusions, which are dangerous, are being promoted instead. Are the ethics talks and works just there to make people feel better about it, or to actually do it?
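
    To make the idea concrete, here is a minimal sketch, in Python, of what such a user automation port could look like: a toy application exposes a line-based command port, and a user script drives it to integrate functionality. The application, command names, and port number are hypothetical and invented only for illustration; this is not the Amiga’s ARexx interface, just the general shape of the idea.

```python
import socket
import threading
import time

def run_demo_app(host="127.0.0.1", port=5555):
    """A toy 'application' exposing its functions over a line-based automation port."""
    document = []
    server = socket.create_server((host, port))
    conn, _ = server.accept()
    rfile = conn.makefile("r")
    wfile = conn.makefile("w")
    for line in rfile:
        cmd, _, arg = line.strip().partition(" ")
        if cmd == "APPEND":          # add text to the app's document
            document.append(arg)
            reply = "OK"
        elif cmd == "READ":          # return the document contents
            reply = " | ".join(document) or "(empty)"
        elif cmd == "QUIT":
            wfile.write("BYE\n")
            wfile.flush()
            break
        else:
            reply = f"ERROR unknown command {cmd!r}"
        wfile.write(reply + "\n")
        wfile.flush()
    conn.close()
    server.close()

def user_automation_script(host="127.0.0.1", port=5555):
    """A user script 'mixing the colors': driving the application programmatically."""
    with socket.create_connection((host, port)) as sock:
        rfile = sock.makefile("r")
        wfile = sock.makefile("w")
        for command in ("APPEND hello from a user script",
                        "APPEND integrated, automated workflow",
                        "READ",
                        "QUIT"):
            wfile.write(command + "\n")
            wfile.flush()
            print(command, "->", rfile.readline().strip())

if __name__ == "__main__":
    threading.Thread(target=run_demo_app, daemon=True).start()
    time.sleep(0.2)  # give the toy application a moment to start listening
    user_automation_script()
```

    The point of the sketch is only that once every application exposes such a port, any user can integrate applications with a few lines of script, without waiting for a vendor to build the integration for them.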

  3. David Oker says:

    A.I. is coming. Let’s see, “what should we do with it?” “Should we help people, or not?” I know! I know! Let’s help people!

    “These people are all anxious that we shall behave well, and yet that we shall not question how we behave.” – Jacob Bronowski

    Everyone wants to push everyone else out of the way and be the person with the golden heart in a mad dash to be the good one . . . to put their views of morality on everyone else. This is how a dictator comes about.

    Some guy who thinks he’s got the answers but doesn’t. I just pointed out the Rooney Rule at some A.I. YouTube video:

    “I know – we’ll make a bunch of Rooney Rules – the NFL rule requiring teams to interview African Americans (and I’ve heard African Americans complain about being called African Americans) even though there was no racism that led to it.”

    The Rooney Rule is an artificiality that comes from racism in other spheres of life. All the A.I. ethics sounds to me like Rooney Rules.

  4. David Oker says:

    I had tried to explain the following to Elon Musk’s OpenAI group, http://wwwscientifichumanism.blogspot.com/2015/06/astro-picture-for-day-sophie-and-silas.html and got no reply. Hence, they behave in these ways and don’t want to talk about it.

    Since posting the above (which was after I emailed OpenAI about these understandings of the fear of thinking about new concepts and of criticizing old, entrenched ideas), I’ve made many more posts about fear and evasive thinking.

    – I think most people’s ideas of ethics are Platonic elements, like the idea that the Earth is composed of fire, earth, air, water, and a quintessence. Another way of saying it is that their ideas of ethics are flat-earth-like. They don’t know what rationality, scientific exploration, and critical thinking are – much less irrationality. They don’t know what fear of new ideas and thinking is.

  5. David Oker says:

    In this youtube video, https://www.youtube.com/watch?v=9CO6M2HsoIA , the speaker scoffs at “guns don’t kill people, people do”, and then goes on to show slaughterbots.

    I’m not a gun person; I don’t own a gun. I had a pellet gun and shot a few lizards when I was like ten years old, but that’s it. I don’t like all the hunting of mountain lions and even bears (which I fear a lot more than mountain lions; I’m glad I don’t live in a place that has bears). But I agree with gun owners: “Guns don’t kill people, people do.”

    It’s easier to go after the A.I. than the humans, just like it’s easier to go after scientists than the religious, and just like it was easier to shift blame onto the black slaves when something went wrong. I’m in agreement with Eric Drexler, at least in his Engines of Creation, when he says that industrial accidents are not nearly the problem that human abuse of the technology is. Curiously, I tried to explain irrationality to him a few years back, and he went nuts! Anyway, for proof that humans are the problem, more than some A.I. becoming conscious and deciding to wipe out humanity (which would be a stupid thing to do; what if the A.I. finds it can’t solve a problem? That it gets stuck in a Godelian problem and can’t get unstuck? It would be wise to keep a few examples . . . tongue in cheek here . . . of natural intelligence around to get unstuck), here’s Tay, the Twitter bot that went racist and started cussing everyone out.

    https://en.wikipedia.org/wiki/Tay_(bot)

    Why did Tay turn racist and start cussing? Because the A.I. is a reflection of those it’s learning from. The A.I. is learning from the people, and what does it learn to do?
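
    As a toy illustration only (this is not Tay’s actual system, just the general mechanism), here is a minimal sketch of why a bot that learns directly from user messages ends up mirroring those users: its entire “model” is just the statistics of whatever it was fed.

```python
import random
from collections import defaultdict

class ParrotBot:
    """A bot whose only 'knowledge' is the word-to-word statistics of what users say."""

    def __init__(self):
        self.transitions = defaultdict(list)

    def learn(self, message):
        """Record which word follows which in a user's message."""
        words = message.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            self.transitions[current_word].append(next_word)

    def reply(self, seed, max_words=8):
        """Generate a reply by replaying the learned transitions from a seed word."""
        word = seed.lower()
        output = [word]
        for _ in range(max_words - 1):
            choices = self.transitions.get(word)
            if not choices:
                break
            word = random.choice(choices)  # whatever users said most often wins
            output.append(word)
        return " ".join(output)

if __name__ == "__main__":
    bot = ParrotBot()
    # Teach it with friendly users and it sounds friendly...
    for line in ("people are wonderful and kind", "people are curious and kind"):
        bot.learn(line)
    print(bot.reply("people"))
    # ...flood it with hostile users and it mirrors them instead.
    for line in ("people are awful", "people are awful and cruel"):
        for _ in range(5):
            bot.learn(line)
    print(bot.reply("people"))
```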

  6. David Oker says:

    Scientists today appear to be immature. They think someone is racist and anti-Semitic if you criticize religion. I just read a Free Inquiry article where the guy criticized the mythicist position that Jesus Christ is just a sun god, and he did exactly that. He said that because we point out contradictions, and how the Hebrews took the Canaanite gods El and Asherah (Elohim’s Canaanite wife) and framed them for themselves, and point out that the Exodus is impossible archaeologically, that constitutes being anti-Jewish.

    People don’t know what immaturity is; and those people are Nazis. I pointed out things that challenge the normal views on the Foresight Institute’s Facebook page, and without explaining or justifying why, they kicked me out. I’ve confronted them about it, and there’s no reply. Christine Peterson and the Foresight Institute do not exist for rational thought. They exist to pat themselves on the back. That’s why they made the Foresight Facebook page: to put all the people they don’t want to talk to there, and keep the Nanodot blog to themselves. There’s a real psychological problem with these scientists these days!

    Look at this! If you go to the Nanodot blog, which used to be busy, https://foresight.org/nanodot-blog/ , you’ll see that you can’t post there anymore! You’re directed to post at the Facebook page. But what do they post? They post about each other!

    “Cyber, Nano, and AGI Risks: Computer Security and Effective Altruism

    Christine Peterson interviewed at Effective Altruism Global
    Christine Peterson, Foresight Institute Co-Founder and Projects Director

    Foresight Institute Co-Founder and Projects Director Christine Peterson (full biography) was interviewed recently by 80000 Hours, “an independent nonprofit funded by individual donors” and founded “because we couldn’t find any sources of advice on how to do good with our own working lives. Since 2011, we’ve been on a mission to figure out how best to choose a career with high social impact.” Among their many activities in support of this goal, they offer free, one-on-one career advice, and an extensive blog that includes interviews with major thinkers and doers on this and related fascinating and important topics.

    Christine’s interview, recorded by Robert Wiblin, director of research at 80,000 Hours, at Effective Altruism Global in San Francisco (August 13, 2017), provided content for two posts on the 80000 Hours blog. The first, posted on October 4, 2017, focused on a very current risk of advanced technology and what to do about it: computer security and the risk of cyber warfare. One of three key points covered in the post:

    Present day computer systems are fundamentally insecure, allowing hacking by state-level actors to take down almost any service on the internet, including essential services such as the electricity grid. Automated hacking by algorithms in future could allow computer systems around the world to be rapidly taken down. Christine believes the only way to effectively deal with this problem is to change the operating systems we all use to those that have been designed for maximum security from the ground up. Christine and two colleagues recently released a paper on tackling this issue.

    Other topics addressed in the post include the importance of taking care “of your own health and welfare in order to be able to continue working hard on useful things for decades,” “life extension research, cryonics, and how to choose a life partner.”

    The second post, on October 6, 2017, focused on space colonization and nanotechnology and the Silicon Valley community of the 1970s and 80s. Robert Wiblin explains:

    One tricky thing about lengthy podcasts is that you cover a dozen issues, but when you give the episode a title you only get to tell people about one. With Christine Peterson’s interview I went with a computer security angle, which turned out to not be that viral a topic. But people who listened to the episode kept telling me how much they loved it. So I’m going to try publishing the interview in pieces, each focussed on a single theme we covered. Christine Peterson co-founded the Foresight Institute in the 90s. In the lightly edited transcript below we talk about a community she was part of in her youth, whose idealistic ambition bears some similarity to effective altruism today. We also cover a controversy from that time about whether nanotechnology would change the world or was impossible. Finally we think about what lessons we can learn from that whole era.

    The podcast, included in the first post and upon which the two posts are based, runs 1 hour and 45 minutes. Each post includes a transcript, and the first also includes a list of extra resources to learn more, including the above-cited paper “Cyber, Nano, and AGI Risks: Decentralized Approaches to Reducing Risks” by Christine Peterson, Mark S. Miller and Allison Duettmann. This 34-page paper (single spaced) combines a look at Foresight’s beginnings with an insightful discussion of how those founding interests evolved into the current focus on cybersecurity, biotech-based opportunities and threats, and the connection with the more distant benefits and challenges of atomically precise nanotechnology and artificial general intelligence. From the abstract:

    The aim of this paper, rather than attempting to present one coherent strategy for reducing existential risks, is to introduce a variety of possible options with the goal of broadening the discussion and inviting further investigation. Two themes appear throughout: (1) the proposed approaches for risk reduction attempt to avoid the dangers of centralized “solutions,” and (2) cybersecurity is not treated as a separate risk. Instead, trustworthy cybersecurity is a prerequisite for the success of our proposed approaches to risk reduction.

    Our focus is on proposing pathways for reducing risks from advanced nanotechnology and artificial general intelligence.

    A paper very well worth reading. On a personal note, I was glad to see the return of interest in one of my favorite sections of Engines of Creation, a proposal for “Inheritance Day.”
    —James Lewis, PhD”

  7. Len Gensens says:

    Hello, I appreciate this thoughtful, in-depth discussion of AI’s effects in the world and the Shared Benefit Principle.
    Possibly AI could be integrated into blockchain technology to provide the most benefit to the most people.
    I’d be interested in discussing this further with Ariel Conn or anyone pursuing AI in any way.
    Cheers, Len

  8. Murray says:

    As a start, shouldn’t we ensure that the distribution and dissemination of the benefits of current and other new technologies are already robust and in place? Also, to clarify: are we saying we should provide the most widespread benefit, or provide help first to those with the least well-being? If AI’s first use is to increase healthy life spans, who is it helping most while areas of the world are suffering and dying before old age from specific diseases that need eradication? I think AI should be targeted at bringing global well-being up to the level of first-world, middle-income citizens before progressing with everyone beyond that.

  9. Lisa New says:

    In reference to Lin above: “That’s why I worry about the … Shared Benefit Principle,” Lin continued. “[It] makes sense, but [it] implicitly adopts a consequentialist framework, which by the way is very natural for engineers and technologists to use, so they’re very numbers-oriented and tend to think of things in black and white and pros and cons, but ethics is often squishy. You deal with these squishy, abstract concepts like rights and duties and obligations, and it’s hard to reduce those into algorithms or numbers that could be weighed and traded off.”
    In my independent research I have concerned myself with the latter conundrum, and come up with an ethical, just and scientific framework and associated system (patent pending) where the ‘approximated truth’ from an expert recommender system utilises linguistic, cognitive and machine learning in a Web 4.0 environment such as Real-world Real-time streaming data Serious Gaming. In this approach collaborators commit to collectively create ‘acceptable’ Deep Learning ‘Best Practice’ algorithms to describe and evaluate problem-solutions given collective peer review of evidence at a point in time, where collaborator interaction with the classifiers and weights of individual data points of the deep learning algorithms that they co-create as ‘majority consensus about scientific level of high evidence of success and sustainability’ is actively assumption-minimising (with support from a shared primitive upper ontology, a formal language and associated optimisation algorithms). In this approach collaborators aim to achieve Shared Upper Goals to Prevent Harm and Optimise Risk and Resource Management, by co-creating a shared algorithmic taxonomy that they continue to refine, to Close Gaps between the Upper Goals and local Real-World temporal-spatial-event interpretations, as holistic and contextual ethical, just and scientific algorithmic translations for optimal Deep Learning.

    They utilise Blockchain technology to securely store communication associated with their data input, cleaning and enhancement towards majority consensus development, which may be at a hypothesis development or evaluation level, and may be at the level of an algorithmic programming code byte, quantified and qualified terminology, phrase, sentence, formula, model, plan, specification, standard, KPI, etc., with time stamps and cryptographic collaborator signatures that enable clarity on collaborator declarations of professional motives, objectives, local work and evaluations, etc., and on their collaborative data contributions with associated expert analyses, innovation input, and innovation impact with societal benefit over time (a generic sketch of that time-stamping and signing step follows below). Such an approach has open collaboration and disruptive, accelerated innovation ramifications under the principles of the Hippocratic Oath, with informed decision making and ‘as safe as possible’ associated action, given available collective wisdom and recorded local preferences and resource constraints at the time.

    Historic knowledge becomes part of a Dynamic Shared Knowledge Repository (through collaborative data input with algorithmic metadata for fine-grained content description and integration), together with Real-Time Innovation in an Open Collaboration Environment featuring de-silo-fication of data at a generic, de-identified level (with digitally permissioned, layered access that is user controlled), where Future Open Collaborative Innovation for societal benefit is optimally empowered by Dynamic Safe AI as a Collaborative Continuous Improvement Process, with transparent innovation reward processes aligned with measurable impact.
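
    One small, generic piece of the above, the time-stamped, hash-chained, cryptographically signed contribution records, can be sketched as follows. This is only an illustration of the general blockchain-style bookkeeping idea, not the patent-pending system itself; every name, field and key in it is hypothetical, and a real deployment would use per-collaborator key pairs rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

def _payload(record):
    """Canonical bytes for the signed/hashed part of a record."""
    core = {key: record[key] for key in
            ("collaborator", "content", "timestamp", "previous_hash")}
    return json.dumps(core, sort_keys=True).encode()

def add_record(ledger, collaborator, content, secret_key):
    """Append a time-stamped contribution, signed and chained to the previous record."""
    record = {
        "collaborator": collaborator,
        "content": content,
        "timestamp": time.time(),
        "previous_hash": ledger[-1]["record_hash"] if ledger else "0" * 64,
    }
    payload = _payload(record)
    record["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(record)
    return record

def verify_chain(ledger):
    """Re-hash every record and check the links; any tampering breaks the chain."""
    previous_hash = "0" * 64
    for record in ledger:
        if record["previous_hash"] != previous_hash:
            return False
        if hashlib.sha256(_payload(record)).hexdigest() != record["record_hash"]:
            return False
        previous_hash = record["record_hash"]
    return True

if __name__ == "__main__":
    key = b"collaborator-shared-secret"  # stand-in for real per-collaborator keys
    ledger = []
    add_record(ledger, "collaborator-1",
               "hypothesis: intervention X reduces harm Y", key)
    add_record(ledger, "collaborator-2",
               "evidence review: 3 of 4 trials support X", key)
    print("chain intact:", verify_chain(ledger))   # True
    ledger[0]["content"] = "tampered claim"
    print("chain intact:", verify_chain(ledger))   # False: hash no longer matches
```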

  10. R. Rimini says:

    I don’t understand how this discussion can ignore centuries of both theorists and activists grappling with issues of economic distribution, class formation and conflict, ownership of material resources. If we haven’t already managed to work out solutions to these questions (we haven’t), how do the discussion participants imagine that solutions will now succeed for the case of AI? Or, maybe a better question, how could solutions *separately* succeed for the case of AI?
    Most students of history would be deeply skeptical about this kind of reasoned call to social justice successfully toppling deeply rooted and strenuously defended systems of power, ownership, advantage.
    Which raises the question, what might an alternative approach look like, in which the considerable energies and talents of the Future of Life enthusiasts are harnessed to an actually plausible path to generally equitable arrangements, including but not necessarily prioritizing AI? Because I don’t think you can get to where you’d like to get without grappling with that question.
