From the WP: How do you teach a machine to be moral?

In case you missed it…

Francesca Rossi, a member of the FLI scientific advisory board and one of the 37 grant recipients in the AI safety research program, recently wrote an article for the Washington Post describing the challenges of building an artificial intelligence with the same ethics and morals as people. In the article, she highlights her project, which brings together not just AI researchers but also philosophers and psychologists, all working to make AI both trustworthy and trusted by the people it will work with.

Learn more about Rossi’s work here.

1 reply
  1. Mindey says:

    Article: “Optimization with constraints! We can do it!”

    I don’t think so. A.I. could respect constraints while still optimizing for its own goals… Actually, human minds are nothing more than optimizers, too, and there are plenty of examples of politicians acting in accordance with stated principles while optimizing for their own agendas. So the real problem seems to be that mankind does not have a way for all of its people, together, to define well-defined goals for mankind.
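    As a minimal sketch of that point (the objective and the “safety” constraint below are toy examples for illustration, not anything from Rossi’s work), a constrained optimizer happily satisfies an imposed constraint while its solution is still driven entirely by its own objective:

    # Toy illustration: a constrained optimizer satisfies an imposed constraint
    # while still maximizing its own (possibly misaligned) objective.
    # The objective and constraint are hypothetical, purely for illustration.
    from scipy.optimize import minimize

    # The agent's own objective: maximize x0 + x1 (scipy minimizes, so negate it).
    def agent_objective(x):
        return -(x[0] + x[1])

    # A designer-imposed "safety" constraint: x0 + 2*x1 <= 10,
    # written as g(x) >= 0 per scipy's 'ineq' convention.
    safety_constraint = {"type": "ineq", "fun": lambda x: 10 - (x[0] + 2 * x[1])}

    result = minimize(agent_objective, x0=[0.0, 0.0],
                      bounds=[(0, 10), (0, 10)],
                      constraints=[safety_constraint])

    print("solution:", result.x)  # the constraint holds...
    # ...but the solution is chosen purely to serve the agent's objective;
    # the constraint only trims the search space, it doesn't supply intent.

    The constraint is respected, but nothing in the setup says anything about what the designer actually wanted.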

    If we had a way to agree on the goals of mankind, the A.I. problem would be easier, because we could then build optimizers focused on the narrowly scoped problems we define, without requiring any localized open super-intelligence.

    I think self-learning neural networks (deep learning tech) show that intelligence will behave much like nuclear material: a neural net with critical computing power and connectivity will tend to yield uncontrollable super-intelligences.

    So, one way forward would be to define our goals and use sub-super-intelligences (i.e., ones that don’t cross the “critical mass” barrier) to optimize for them, while taking care that communication links do not allow for “criticality accidents”.
