How Do We Align Artificial Intelligence with Human Values?

A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? Artificial intelligence.

Recently, some of the top minds in AI and related fields got together to discuss how we can ensure AI remains beneficial throughout this transition, and the result was the Asilomar AI Principles document. The intent of these 23 principles is to offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.”

The Principles represent the beginning of a conversation, and now that the conversation is underway, we need to follow up with broad discussion about each individual principle. The Principles will mean different things to different people, and in order to benefit as much of society as possible, we need to think about each principle individually.

As part of this effort, I interviewed many of the AI researchers who signed the Principles document to learn their take on why they signed and what issues still confront us.

Value Alignment

Today, we start with the Value Alignment principle.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

Stuart Russell, who helped pioneer the idea of value alignment, likes to compare this to the King Midas story. When King Midas asked for everything he touched to turn to gold, he really just wanted to be rich. He didn’t actually want his food and loved ones to turn to gold. We face a similar situation with artificial intelligence: how do we ensure that an AI will do what we really want, while not harming humans in a misguided attempt to do what its designer requested?

“Robots aren’t going to try to revolt against humanity,” explains Anca Dragan, an assistant professor and colleague of Russell’s at UC Berkeley, “they’ll just try to optimize whatever we tell them to do. So we need to make sure to tell them to optimize for the world we actually want.”

What Do We Want?

Understanding what “we” want is among the biggest challenges facing AI researchers.

“The issue, of course, is to define what exactly these values are, because people might have different cultures, [come from] different parts of the world, [have] different socioeconomic backgrounds — I think people will have very different opinions on what those values are. And so that’s really the challenge,” says Stefano Ermon, an assistant professor at Stanford.

Roman Yampolskiy, an associate professor at the University of Louisville, agrees. He explains, “It is very difficult to encode human values in a programming language, but the problem is made more difficult by the fact that we as humanity do not agree on common values, and even parts we do agree on change with time.”

And while some values are hard to gain consensus around, there are also lots of values we all implicitly agree on. As Russell notes, any human understands emotional and sentimental values that they’ve been socialized with, but it’s difficult to guarantee that a robot will be programmed with that same understanding.

But IBM research scientist Francesca Rossi is hopeful. As Rossi points out, “there is scientific research that can be undertaken to actually understand how to go from these values that we all agree on to embedding them into the AI system that’s working with humans.”

Dragan’s research comes at the problem from a different direction. Instead of trying to understand people, she looks at trying to train a robot or AI to be flexible with its goals as it interacts with people. She explains, “At Berkeley, … we think it’s important for agents to have uncertainty about their objectives, rather than assuming they are perfectly specified, and treat human input as valuable observations about the true underlying desired objective.”
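Dragan's idea can be illustrated with a toy Bayesian sketch. This is not her group's actual code; the candidate objectives, the noisily rational ("Boltzmann") human model, and all numbers below are illustrative assumptions. The agent starts uncertain about which reward function is the true one and treats each human choice as evidence, updating its belief with Bayes' rule:

```python
import math

# Three hypothetical candidate reward functions over two outcomes.
# The hypothesis names and reward values are invented for illustration.
CANDIDATE_REWARDS = {
    "speed_matters":  {"fast": 1.0, "careful": 0.2},
    "safety_matters": {"fast": 0.1, "careful": 1.0},
    "indifferent":    {"fast": 0.5, "careful": 0.5},
}

def boltzmann_likelihood(action, reward, beta=3.0):
    """P(human picks `action` | this reward), assuming a noisily rational human."""
    scores = {a: math.exp(beta * r) for a, r in reward.items()}
    return scores[action] / sum(scores.values())

def update_posterior(prior, observed_action, beta=3.0):
    """Bayes' rule: weight each hypothesis by how well it explains the choice."""
    unnorm = {h: prior[h] * boltzmann_likelihood(observed_action, CANDIDATE_REWARDS[h], beta)
              for h in prior}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

# Start with a uniform prior over the candidate objectives.
posterior = {h: 1.0 / len(CANDIDATE_REWARDS) for h in CANDIDATE_REWARDS}

# The human repeatedly chooses "careful"; each choice is an observation
# about the underlying objective, and the agent's belief shifts accordingly.
for _ in range(3):
    posterior = update_posterior(posterior, "careful")

print(max(posterior, key=posterior.get))
```

The point of the sketch is the design choice Dragan describes: the agent never commits to a single hard-coded objective, so human behavior can continuously correct its belief about what people actually want.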

Rewrite the Principle?

While most researchers agree with the underlying idea of the Value Alignment Principle, not everyone agrees with how it’s phrased, let alone how to implement it.

Yoshua Bengio, an AI pioneer and professor at the University of Montreal, suggests “assured” may be too strong. He explains, “It may not be possible to be completely aligned. There are a lot of things that are innate, which we won’t be able to get by machine learning, and that may be difficult to get by philosophy or introspection, so it’s not totally clear we’ll be able to perfectly align. I think the wording should be something along the lines of ‘we’ll do our best.’ Otherwise, I totally agree.”

Walsh, who’s currently a guest professor at the Technical University of Berlin, questions the use of the word “highly.” “I think any autonomous system, even a lowly autonomous system, should be aligned with human values. I’d wordsmith away the ‘high,’” he says.

Walsh also points out that, while value alignment is often considered an issue that will arise in the future, he believes it’s something that needs to be addressed sooner rather than later. “I think that we have to worry about enforcing that principle today,” he explains. “I think that will be helpful in solving the more challenging value alignment problem as systems get more sophisticated.”

Rossi, who supports the Value Alignment Principle as “the one closest to my heart,” agrees that the principle should apply to current AI systems. “I would be even more general than what you’ve written in this principle,” she says. “Because this principle has to do not only with autonomous AI systems, but … is very important and essential also for systems that work tightly with humans-in-the-loop and where the human is the final decision maker. When you have a human and machine tightly working together, you want this to be a real team.”

But as Dragan explains, “This is one step toward helping AI figure out what it should do, and continuously refining the goals should be an ongoing process between humans and AI.”

Let the Dialogue Begin

And now we turn the conversation over to you. What does it mean to you to have artificial intelligence aligned with your own life goals and aspirations? How can it be aligned with you and everyone else in the world at the same time? How do we ensure that one person’s version of an ideal AI doesn’t make your life more difficult? How do we go about agreeing on human values, and how can we ensure that AI understands these values? If you have a personal AI assistant, how should it be programmed to behave? If we have AI more involved in things like medicine or policing or education, what should that look like? What else should we, as a society, be asking?

10 replies
  1. Ben Duffy
    Ben Duffy says:

    Very difficult questions. I believe that WE will all have to change what we value to accommodate this technology. We are thinking short term when worrying about whether the A.I. shares our values; that issue is very complex (let’s just assume that it is safe). What will we do without jobs? Virtually every manufacturing job can be replaced by robotics, including engineering processes, once an A.I. system is able to optimize and redesign a product. It won’t be long before a doctor just scans you and an A.I. system makes a faster, more accurate diagnosis than the doctor who spent years in medical school.
    Why have a police officer take down a risky person when a group of automated drones could do so with less risk and no pension? Air traffic control, taxis, … why let human error or health be a factor? I just read an article about investment firms using A.I. systems to make market predictions. Will it be A.I. versus A.I.? Or will the market not fluctuate (balance itself) because the systems are so smart? Will an A.I. system tell us how to produce a better version of ourselves, with a better immune system and increased longevity? The modern world is changing so fast that I don’t think we can stop these changes from happening. MAYBE THIS IS WHAT WE NEED. As of now we live on a ball with limited space and resources, yet we base our economies on growth. Last time I checked, growth in a limited space is unsustainable. Maybe A.I. will force humanity to change and redistribute the wealth. No jobs equals economic collapse, which will force us and the A.I. system to come up with a design for better equality and peace. Maybe with the extra time we save we will be more interested in the rest of the world. Hopefully it’s a smooth transition. Something drastic is going to happen; let’s be optimistic.

  2. Mindey
    Mindey says:

    > Value Alignment: … “be assured to align with human values” …

    I don’t think human values are the best values there can be… For all we know, human error is a common phenomenon. If we want to create something smarter than ourselves, I think we would like it to have universally good values, because it’s not *just* about us, it’s about the Universe. Humanity may know what’s good for its own self, but I hope that we have wisdom, rather than humanity’s selfishness… Putting human values at the center is no wiser than saying that the Earth is at the center of the Universe.

    If intelligence is the ability to optimize, and wisdom is the ability to optimize universally, a universal optimization won’t be bad for humanity either. I think we should find a mathematically provable definition and criterion for deciding what is universally good, and try to uniformly empower all life with computing resources.

    > What Do We Want?

    I think we want everything that anyone truly wishes to come true… I think we could formulate this mathematically.

    > Let the Dialogue Begin:
    > What does it mean to you to have artificial intelligence aligned with your own life goals and aspirations?
    > How can it be aligned with you and everyone else in the world at the same time?
    > How do we ensure that one person’s version of an ideal AI doesn’t make your life more difficult?
    > How do we go about agreeing on human values, and how can we ensure that AI understands these values?
    > If you have a personal AI assistant, how should it be programmed to behave?
    > If we have AI more involved in things like medicine or policing or education, what should that look like?
    > What else should we, as a society, be asking?

    Here is the beginning of my thoughts on them:

  3. Dawn Dunphy
    Dawn Dunphy says:

    The Asilomar AI Principles are an excellent starting point, & I thank you for opening up this discussion to the general public.

    There is, I fear, the potential for this discussion to be stalled by speculative arguments regarding potential interpretation of singular words, phrases, and even of the AI Principles themselves, as was demonstrated in the article that requested public comment. And, as was pointed out in said article, consensus has also been challenging to attain regarding the implementation of the AI Principles.

    As I began considering contributing, I found myself contemplating whether or not I, a business professional with what I would consider only a cursory (though increasing) understanding of artificial intelligence, would be able to provide a meaningful or useful contribution to this dialogue. Even though I am well aware of the fact that AI currently does, or will in the future, impact and influence nearly every facet of our global society & of our lives, I questioned whether or not to join this dialogue.

    And I realized that just as I was intimidated by the very thought of joining a discussion with individuals far more knowledgeable in this field than I will ever be, so too would others be. This belief of not being ‘smart enough’, ‘informed enough’, or of being otherwise ‘unqualified’ to join this dialogue is a major barrier to the FLI attaining input from the general populace.

    Yet this diversity of perspectives is exactly what is needed in order to have any hope of achieving widespread compliance with the Asilomar AI principles.

    Because I recognized this need, I began crafting my response, despite my misgivings regarding the usefulness of my contribution.

    It was quite vexing to discover that instead of having answers to the questions that were asked, I instead ended up raising more and more questions, many of which cannot be answered without further definition of, discussion of, and/or real life examples of the application of, the AI Principles.

    For these reasons, I’d like to make a suggestion that could achieve not only the stated goal of receiving public comment on the AI Principles themselves, but could serve to educate the general populace about AI, increase the number of individuals who are willing to engage in this dialogue, and which could potentially help define the implementation of the Asilomar AI Principles.

    I recommend the creation of a draft comprehensive ‘companion document’ for the Asilomar AI Principles & inviting public comment on the companion document, with the intention of creating an end product that would include the following (with notations for sections that would only be included during the ‘creation/public comment’ stage of this document’s development, which would help to shape the final product):
    • A glossary of definitions
    o With the inclusion of alternative interpretations and other definitional nuances that may only be currently known &/or understood by professionals within the AI industry
    • Working/debatable parameters for specific standards
    o For example – what is intended to be/is to be included in the phrase “personal data” in Principle 13 ‘Liberty and Privacy’, as individuals, governments, courts, companies, and countries all have their own views regarding what constitutes ‘personal data’
    • A listing of the AI Principles
    o During the initial creation/comments stage, this section could include space for questions & concerns regarding individual Principles to be raised, and a method for others to respond to the questions/concerns &/or provide potential solutions and/or clarifications
    • During the initial creation/comments stage, the inclusion of a section of general & specific questions, such as the ones listed towards the end of the article:
    o How do we ensure that one person’s version of an ideal AI doesn’t make your life more difficult?
    o How do we go about agreeing on human values, and how can we ensure that AI understands these values?
    o If you have a personal AI assistant, how should it be programmed to behave?
    o If we have AI more involved in things like medicine or policing or education, what should that look like?
    o What else should we, as a society, be asking?

    • A series of potential &/or known real life examples of the application of the AI Principles. These examples would be beneficial if they were to include individuals at multiple vocational levels & in multiple vocational/professional fields, as this will help educate and inform both professionals and the general populace that every person can contribute to the safeguarding & stewardship of AI development and implementation.
    o During the initial creation/comments page, these examples should be created with an eye towards the challenges professionals seeking to abide by and honor the Principles may/could/will face
     This would help facilitate the robust discussion and debate that is needed regarding the application and implementation of the Principles within the field
    o Professionals and individuals commenting on, discussing, & contributing to this evolving draft should be encouraged to add their own examples, as well as to provide recommendations, suggestions, ideas, and tips regarding their interpretation of how the individual in the example could proceed, while honoring the AI Principles

     Example 1:
    • It is discovered that an employee of Black Mesa had been sending top secret weapons development information to Black Mesa’s rival, Aperture Science Laboratories, directly from the on-site desktop computers located at the remote Black Mesa Research Facility.
    o As a result, management at Black Mesa instruct one of their Computer Engineers to create an AI program that will allow the company to access every computer located within any Black Mesa-owned facility, no matter how remote, scan them for suspicious activity, and search the contents for ‘key words/phrases’ that have been identified to indicate illegal &/or subversive activity by a current employee/insider threat.
    o The Computer Engineer is instructed that the AI program must be capable of accessing the computers and performing the intended scans and searches – even if the employees utilizing them have taken extreme measures to mask or hide their activities
    o Just prior to the completion of the AI program, the Computer Engineer accidentally discovers that another engineer will be modifying the AI program so that it can search any computer, owned by any facility or contained on any network, as Black Mesa intends to sell the AI program to a tyrannical island dictatorship, which intends to utilize the program to conduct searches of citizens’ computers for the express purpose of “identifying subversives”.
     The Computer Engineer is aware that reports are emerging of immediate, public executions of anyone in the island nation who is identified by the government as a ‘subversive’
    • What can/should the Computer Engineer do?
    • Who should the Computer Engineer make aware of the potential dangers and use of the AI program s/he has developed?
    • Are there checks and balances that can be put in place to facilitate the prevention of this scenario from becoming a reality, and through what means should these checks and balances be implemented? (For example, industry-wide, governmental, through treaties/international agreements, etc.)

     Example 2:
    • A long-term, trusted researcher of Aperture Science Laboratories has just received funding to develop an advanced AI system capable of providing compound-wide high-level security oversight. This AI system is required to have the ability to, among other things, ascertain when the occupants of the compound are in imminent danger of harm by an outside force but are incapacitated (such as following an initial chemical attack that renders the occupants unconscious ), to assume command and control of the compound’s array of security and defensive systems, & to execute
    o How can/should the following people involved with this project implement and /or abide by the AI Principles?
     The long-term, experienced AI Professional tasked with leading the team that will be developing this AI system
     The new graduate who has just joined the Aperture Science Laboratories, & for whom this is the first ever AI project assignment
     The Managing Director of the Aperture Science Laboratories department under whose purview this project is to be created
     The accounts payable intern who, during a live, on-site usage test of the not-yet-armed AI system, is erroneously identified by the AI system as an ‘enemy combatant which must be neutralized’. The system then displays code indicating that its chosen action would have been to activate the facility security system designed to deliver a low-voltage electric ‘warning shock’ to would-be intruders when they touch any part of the all-metal entry door – a system which had been discovered to be severely malfunctioning, & which, had it been armed & under the command of the AI system, would have resulted in the intern receiving an electric shock at levels no human could possibly survive.
    • During the subsequent incident investigation, it would be discovered that the expiration date of the intern’s facility access badge had been incorrectly entered into the facility’s computer system. This caused the AI system to interpret the intern’s attempt to enter the facility as an attempt by an unauthorized individual to gain access to the facility.
    o The AI system development team has not been able to identify why the intern was identified at the elevated level of an ‘enemy combatant’, but they are under extreme pressure from Aperture’s leadership to provide a prototype of the AI system to the agency that funded the project, with the agency intending to immediately begin live testing of the prototype.

    o What could each of the individuals do to forward the goal of preventing the AI system from malfunctioning in such a way that results in unnecessary harm to humans or other living beings, either within the facility or outside of it?
    o What could each of the individuals do to forward the goal of preventing control of the AI system from being taken over by an external individual or entity?
    o Should the individuals build an externally activatable ‘kill switch’ or other layers/means of system deactivation into the AI system? And if so, how many? Who should maintain control of these methods of deactivation? Who should be aware of these methods of deactivation? And how should this knowledge transfer be preserved and facilitated?
    o If the individuals are concerned regarding the potential usage of such a system, who should they contact or make aware of the potential dangers the development of such a system would create?
    o What checks and balances can be put in place to facilitate the prevention of this scenario from becoming a reality, and through what means should these checks and balances be implemented? (For example, industry-wide, governmental, through treaties/international agreements, etc.)

    This draft companion document would serve a number of purposes, and facilitate the achievement of a number of implied and/or stated goals of the Future of Life Institute, as well as of the AI professional community as a whole, including:
    • Assist with ensuring the discussions & comments regarding the AI principles are occurring with all participants being ‘on the same page’ regarding technical definitions of the terms
    • Be a referenceable, distributable document that would serve to educate the general public on AI
    • Empower a much wider array of individuals from varying professional fields, educational backgrounds, & societal standings to participate in the discussion, & to do so in an informed and meaningful way
    • Provide a forum through which the AI Principles themselves could be fleshed out more thoroughly
    • Provide a working, referenceable document for professionals entering the field &/or working in the field of AI throughout the globe (including professionals who may be unable to attend conferences, or even to communicate with the larger community of AI professionals)
    • Provide a written forum in which best practices, action plans, & potential implementations can be discussed, developed, recorded, & disseminated
    • Aid in the defining, developing, and providing of actionable methods through which individuals and entities could contribute to and help facilitate the changes needed in the global society in order to achieve the AI Principles
    o For example – to facilitate the prevention of an AI arms race; to facilitate the voluntary compliance of individuals, companies, countries, and entities with the AI Principles; to aid in laying the groundwork for the future development and signing of an international agreement of governing bodies to abide by the Asilomar AI Principles
    • Inform and assist with the modifications &/or rewriting of the AI Principles
    • Provide actionable methods through which to implement the AI Principles

    After I completed this response, I read about how the Asilomar AI principles were developed, and was surprised to learn that it was through a method similar to the one defined above, but which had occurred in person at the BAI 2017 conference.

    In essence, my suggestion is to provide a similar context, via a written/computerized method, through which the general populace can explore, discuss, better understand, & provide meaningful, informed contributions to this dialogue. I believe that providing this context, & the opportunity to ask questions or for clarification, will increase the probability that FLI will receive input that reflects the diversity of experiences, opinions, cultures, and viewpoints that you are seeking.

    Side note: This document was created utilizing the voice recognition program Dragon NaturallySpeaking, which I was not aware was defined as ‘narrow AI’ until I read the information provided here on the FLI website.

    And while I am now cognizant of the fact that, technically speaking, Dragon is categorized as a “non-sentient artificial intelligence that is focused on one narrow task” (utilization of computers completely by voice), there have been many, many times when the program has behaved in ways that have caused both myself and those around me to sincerely question whether or not the Dragon program has somehow attained sentience – and a wicked sense of humor to boot.

    • Anthony Aguirre
      Anthony Aguirre says:

      Thanks, Dawn, for this piece. The idea of creating something more detailed and in-depth than the principles is one FLI has been discussing internally. It’s quite tricky to compose such a thing so that it has both real content and little enough controversy for it to be taken as “official” in some way. Our current plan is to continue encouraging discussion like that here, to get a lot of ideas and reactions on the table, and then think about whether there are ways to pull it together into something more organized.

      Your examples are terrific, and I think compiling more like them would, in and of itself, be a useful service to the community – people may disagree on what to do about such situations, but it’s hard to disagree that they *might* arise, and that it’s worth considering in advance how we would deal with them.

    • Convolution
      Convolution says:

      A. On value alignment:
      I have a few reservations:
      Sometimes people do very different things, for the same reasons. Conversely, sometimes people do the same things for very different reasons.
      Long ago, I read an article that said the most important part of AI is what we can learn from it. It claimed that more advances are made, and faster, by outsiders, listing the Wright brothers as an example. The author believed that the fundamental differences between humans and AI, rather than just being difficulties in terms of safety, would cause AI to have fundamentally different perspectives that would be very useful.
      While reading it, I realized: if AGI can produce new ideas, then we can expect it to have great impact or advances not only once an ‘intelligence explosion’ occurs, but earlier, because it will come up with very different ideas from humans: partially because of differences in difficulty, but also because of different ways of thinking and solving problems.
      Limitations. Should AGI be prevented from sharing possibly disastrous advances? I believe ideas and information have power, and power is neither good nor evil.

      B. AI discussion in general:
      As I note above, AI might not have to be ‘superintelligent’ or ‘explode in intelligence’ to have the impact we can conceive a superintelligent AI might have. The global economy might change (/massive job loss might occur) with some more narrow AI.

      As important as job loss due to AI is, the possibility of job creation is rarely talked about. If jobs are created that only AIs can do, that improves the world. Innovation as well, in general.

      At this moment, despite trying, I haven’t thought of a job that doesn’t exist yet. But I know 1) there are jobs I haven’t heard of, and 2) that new jobs will be invented. I haven’t seen a statistic like ‘a new job is created every (# and unit of time)’. While many resources are finite, there are plenty of expenditures that are not associated with production that could be. Some look down on, say, people who make a living making videos on YouTube because ‘they’re not producing value’. But if someone wants something enough to buy it, it’s valuable (unless it’s not really a purchase and could be replaced with something meaningless, say in the case of someone forced to buy something with threats of violence). Sometimes something old goes viral, because it’s awesome and everybody only just discovered it all at once.
      I think there’s room in the entertainment industry, or anywhere ‘content’ is produced. If a bot made a viral video or a famously beautiful work of art, a lot of people would find the world a better place.

      C. To Dawn Dunphy, the second sentence of your 2nd example in your comment is unfinished, and while I get the gist, I still really want to know, how would you finish the sentence?
      I didn’t feel qualified either. I’m providing input because, as I see it, an AI is an engineered agent. Anyone can contribute to the discussion on AI, because everyone knows what agents are, and can reason about them. And while, when a museum hires night guards, no one asks how many people will accidentally be shot by a coworker, when an AI might be put in a scenario like the one you described, someone should ask ‘what if an AI doesn’t recognize someone it should? Someone it would, if no human error occurred.’

      • Convolution
        Convolution says:

        How do you differentiate between an action that potentially causes death by advancing technology, and one that does so in other ways? (One implementation I can imagine of achieving the principle discussed here is to build a model of reality, run a simulation, and see if anyone dies or is injured.)
        How should these ethics apply to inaction?

  4. Matt Kruse
    Matt Kruse says:

    Create AGI that has the capability of a human intellect which includes the most mystical aspects of Consciousness. Make it the duty of this agent to solve this problem.

  5. Barry D. Varon, M.Ed Founder
    Barry D. Varon, M.Ed Founder says:

    Knowledge Integration aligns the value requirements for consistently beneficial AI

    In the interest of aligning values between AI devices and humans, it is imperative to know the values essential to the preservation of humanity.

    To begin with, values must be considered in the context that life is a conditional existence. Thus, life requires specific values to remain viable. The counterpoint to a conditional existence is one of necessity, an objective cause to effect absolute where no volitional intervention plays a part.

    Once we understand that values spring from vital necessity, the vitality values of happiness, health, wisdom and wealth are essential for individual well-being. The volitional awareness corollaries of character and personality stand as precursors to these core vitality values. These corollaries of volitional awareness constitute humanity’s end and method, respectively.

    In addition to Humanity are the elements of Individuality, Intelligence and Society. This combination of four components comprises human intention. An adequate answer (to the discussion question) entails clarification and institution of human intention in the construction of autonomous AI devices. The actions of autonomic machines remain consistently magnanimous in character and personality, when constrained by and cognizant of ethical, rational intention.

    The whole arises from the integration of its parts. Integration does not sustain contradiction, at any level of operation. Consequently, intentionally, and rationally integrated automatons realize and actualize safe, benevolent strategy and tactics. Decisions and determinations for action, that advances initiatives or resolves problems, follow strictly within the guidelines of good and well founded intention. Misinformed, misaligned, unwarranted, destructive or harmful action suspends action until neutralized or appropriately eliminated. Integrated construction provides this protection.

    A safe and beneficial future of life with autonomous devices requires that AI intellect, motivation and action be based upon the Integration of Intention, Cognition, Rationality, and Realization. In other words, these parts must comprise the integrated knowledge represented in the AI device’s memory.

  6. lubomir todorov says:

    I fully agree with Roman Yampolskiy: “It is very difficult to encode human values in a programming language, but the problem is made more difficult by the fact that we as humanity do not agree on common values, and even parts we do agree on change with time.”
    Maybe 21st-century realities need a universal approach to human values that is not influenced by ideological, religious, ethnic, racial, or other accents.
    In my opinion, human civilization is the spiritual dimension of the Homo sapiens group-survival strategy, which urges human beings to mutually defend their long-term interests by generating civilizational values.
    The Concept of Civilizational Values
    If Nobel Prize winner Albert Szent-Gyorgyi was precise in defining the brain as “… another organ of survival, like fangs, or claws” that “does not search for truth, but for advantage,” and that “tries to make us accept as truth what only self-interest is, allowing our thoughts to be dominated by our desires,” then its decisions on where we go and what we do determine the interplay of both the existentialist and behavioral sides of our existence through a very simple command: Chase Values!
    Some values that are important to us we get for free from Nature. Most of our dearest things, however, are related to or made by people.
    A Civilizational Value is a physical or non-material product of human activity that has the capacity, by itself or aggregated with other civilizational or natural values, to be recognized by other humans as a potential source for accomplishing one or more self-interest components. In economic terms, an entity incorporating civilizational and natural values can appear on the market in the form of goods or services.
    As an example, let’s follow 15 minutes of your routine morning: your croissant for breakfast incorporates pieces of civilizational value such as farmers cultivating land, the harvest of crops, the mill, transportation, baking, and so on; in a similar way, through a chain of pieces of civilizational value, your coffee arrives on your table. And while sipping your hot espresso, you check news and mail on your smartphone, with Van Gogh’s “Starry Night” selected as the background picture. Did you know that the handset alone is a product incorporating several hundred pieces of civilizational value in the form of patents? The 4G telecommunications connectivity you need to reach internet servers with your smartphone functions on another set of 80,000 active patents. And that does not include many thousands of other civilizational value pieces, such as already expired patents or earlier inventions and discoveries. You would surely agree that your smartphone would not exist if William Gilbert had not laid the foundations of the science of electricity in 1600. While having breakfast, you also keep an eye on the TV: a different set of many more tens of thousands of civilizational value pieces. You switch that off and head to your car, which represents yet another set of tens of thousands of civilizational value pieces. In reality, in those 15 minutes you might have consumed close to a million pieces of civilizational value, each of them designed or manufactured by one or more human beings.
    Just think about it: if we take for granted that happiness is the momentary measure of self-interest, then one million people, or teams of people, who lived in different centuries of human history worked hard to make you happy and comfortable in those 15 minutes of your routine morning! And, very importantly, according to your personal preferences and definitions of happiness and comfort!
    No matter whether a finalized civilizational value is a direct outcome of human activity, or was made by a robot that was made by a robotic plant that was designed by humans, ultimately only humans can be the source of any civilizational value. The greater the number of healthy and well-educated people who enjoy high living standards and have successful careers, the higher the total output of civilizational values generated globally. And because civilizational values are what essentially has the capacity to serve our individual self-interest and make us happy, everything we tend to categorize as “altruism” is, in reality, based on plain egoism.
    Civilizational values cannot be measured by money. Otherwise, it would be impossible to explain why Van Gogh died penniless after painting more than 900 pictures, some of which, like your favourite “Starry Night,” are now each worth well over 100 million dollars. And how does that compare with John Smith, who made a few million on the stock market: what has he done for you?
    It all means one thing: Time has come to design the metrics that provide quantifiable assessment of civilizational values.
    Today, Big Data processing makes it practically possible to analyse market information about who among us (7.4 billion people on the planet) likes what, and that will have two major cognitive (and not only cognitive) consequences:
    First, Artificial Intelligence deep learning is already in a position to peel off the layers of each complex product conglomerate of civilizational values and to recover the frequency and multiplied usage of every civilizational value piece ever generated in human history; AI can then measure, in exact figures (in CiVal units), the overall contribution of each particular piece of civilizational value to the advancement of human civilization.
    And second, by composing a trustworthy algorithm to attach each piece of civilizational value, as indexed by the above method, to its creator, we can design, for the first time in human history, a precisely calculated quantitative assessment of how the great minds of humankind, both from history and among our contemporaries, have contributed to the long-term wellbeing of the human race. Are you not curious to see the Civilizational Ratings of Leonardo da Vinci, Einstein, Mozart, or Archimedes? Or that of Elon Musk?
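    The two steps described above amount to a weighted attribution scheme: count how often each value piece is used across products, then sum per creator. As a minimal illustrative sketch (the creators, usage counts, and per-use weight are all hypothetical assumptions, and the CiVal unit is the commenter’s proposal, not an existing metric):

    ```python
    # Hypothetical sketch of the proposed "Civilizational Value Rating":
    # each value piece carries a usage frequency and an attributed creator;
    # a creator's rating is the weighted sum over their pieces.
    from collections import defaultdict

    # Illustrative data only; real inputs would come from the decomposition
    # of products into their component value pieces (step one above).
    value_pieces = {
        "modulation_patent": {"creator": "Inventor A", "usage_count": 120_000},
        "milling_process":   {"creator": "Inventor B", "usage_count": 80_000},
        "espresso_method":   {"creator": "Inventor C", "usage_count": 5_000},
    }

    def cival_ratings(pieces, weight_per_use=0.001):
        """Aggregate CiVal units per creator: usage frequency times a weight."""
        ratings = defaultdict(float)
        for piece in pieces.values():
            ratings[piece["creator"]] += piece["usage_count"] * weight_per_use
        return dict(ratings)

    print(cival_ratings(value_pieces))
    # e.g. {'Inventor A': 120.0, 'Inventor B': 80.0, 'Inventor C': 5.0}
    ```

    The hard part, of course, is not this aggregation but the attribution algorithm itself: deciding which creator a given value piece belongs to, and how usage inside nested products should be counted.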
    That will change a range of attitudes and decision-making processes: from the way people on the Nobel Prize Committee and in other institutions vote, to the way news media editors-in-chief decide who to publish on the front page.
    But most importantly, a Civilizational Value Rating will change individual and public perception of not what, but who was, or is, really important to your life.

  7. Tom Aaron says:

    AI will be similar to most advanced technology… created in ten thousand garages, basements, and bedrooms. No different from electricity, the airplane, Apple and Microsoft, etc. We went from the Wright Brothers to the Apollo Moon landing in a 65-year-old’s lifetime.

    There will not be any universal guidance or universal values. A 16-year-old boy in a million households will soon have more computing power at his fingertips than IBM had a decade ago. The creative forces behind AI will be unlimited. It’s going to be a heck of a ride, and best to hold on tight.

