The Biggest Companies in AI Partner to Keep AI Safe

Industry leaders in the world of artificial intelligence just announced the Partnership on AI. This exciting new partnership was “established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

The partnership is currently co-chaired by Mustafa Suleyman of DeepMind and Eric Horvitz of Microsoft. Other leaders of the partnership include: FLI’s Science Advisory Board Member Francesca Rossi, who is also a research scientist at IBM; Ralf Herbrich of Amazon; Greg Corrado of Google; and Yann LeCun of Facebook.

Though the initial group members were announced yesterday, the collaboration anticipates increased participation, announcing in their press release that “academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization.”

The press release further described the objectives of the new partnership, saying:

“AI technologies hold tremendous potential to improve many aspects of life, ranging from healthcare, education, and manufacturing to home automation and transportation. Through rigorous research, the development of best practices, and an open and transparent dialogue, the founding members of the Partnership on AI hope to maximize this potential and ensure it benefits as many people as possible.”

Of the partnership, Rossi said:

“Over the past five years, we’ve seen tremendous advances in the deployment of AI and cognitive computing technologies, ranging from useful consumer apps to transforming some of the world’s most complex industries, including healthcare, financial services, commerce, and the Internet of Things. This partnership will provide consumer and industrial users of cognitive systems a vital voice in the advancement of the defining technology of this century – one that will foster collaboration between people and machines to solve some of the world’s most enduring problems – in a way that is both trustworthy and beneficial.”

Suleyman also said:

“Google and DeepMind strongly support an open, collaborative process for developing AI. This group is a huge step forward, breaking down barriers for AI teams to share best practices, research ways to maximize societal benefits, and tackle ethical concerns, and make it easier for those in other fields to engage with everyone’s work. We’re really proud of how this has come together, and we’re looking forward to working with everyone inside and outside the Partnership on Artificial Intelligence to make sure AI has the broad and transformative impact we all want to see.”

The Partnership on AI also reached out to other members of the AI community for feedback. FLI Science Advisory Board member Nick Bostrom said, “AI is set to have transformative impacts on society over the coming years and decades. It is therefore encouraging that the industry is taking the initiative to create a forum in which technology leaders can share best practices and discuss what it means to be a responsible innovator in this burgeoning field.”

Vicki L. Hanson, President of the Association for Computing Machinery, added:

“The Partnership on AI initiative could not come at a better time. Artificial Intelligence technologies are increasingly becoming part of our daily lives, and AI will significantly impact society in the years ahead. Fostering a shared dialogue and building common cause is crucial. We look forward to working with the Partnership on AI to educate the public and ensure that these technologies serve humanity in beneficial and responsible ways.”


2 replies
  1. Lung Chuan CHEN says:

    I am the one who has said “Pay attention to conflicts between/among smart machines” for a long, long time. I noticed this news about two weeks ago. My response to this:

    This could be “heaven” – if they act with open-mindedness, altruism, and sharing;

    This could be “hell” – if they harbor a hidden heart of selfishness, greed, and domination.

  2. Kirk Fraser says:

    A superior ethic for robots is:
    1) An ethical standard of pursuing absolute perfection, applied with each incremental discovery and development. Absolute perfection is superior to perfection in any one field, region, or area of study.
    2) Universal Basic Robots (UBR), where a robot is deployed and embedded with each family worldwide to grow food, feed, educate, entertain, and protect everyone from cradle to Ph.D. in a self-sufficient cooperative partnership. This is superior to Universal Basic Income (UBI), which merely seeks to mitigate robots taking over the economy; UBR instead gives everyone nearly total self-sufficient ability to produce the benefits of the economy, reducing collateral deaths due to disasters and currency devaluations. The beauty of UBR is that it can continue research at the Ph.D. level in every home and share new knowledge via the internet, so eventually problems that previously required outside help could be solved at home too – for example, a new organ might be grown from the patient’s own stem cells.
    3) Integration of AI with the Christian Bible. Teaching people in the same language, culture, and Christian religion can end world hunger, poverty, illiteracy, crime, terrorism, and war. Other popular religions cannot, especially Islam where adherents kill each other as well as outsiders. Children should be taught to recite, study, and build on the Gospel of Jesus Christ to develop ethical faith for spiritual interaction.

    Inferior ethics for robots include:
    1) DARPA’s military ethic, which boils down to: an ethical kill is one which will not result in a Nuremberg trial.
    2) The Oregon Government Ethics Commission, which only investigates direct bribery. It refused a case in which a city attorney who profited from the constant flow of derelicts into his court won a seat on the school board by promising no changes that could reduce that flow.
    3) Neural Network (NN) based cognitive software, which typically achieves only 90% accuracy and can produce illogical text and poor results, including death.
    4) A Muslim robot programmed by a Chinese man to kill people who do not worship it or its leader Caliph in the coming Mideast Caliphate predicted in the Bible Revelation 13 as the Image of the Beast (and set up in the Temple in Jerusalem).

    Neural Network AI can help a robot see and hear fast enough for real time, but it cannot do the cognitive symbolic thought needed for the best ethical standard. At an industry-standard programming speed of 10 debugged lines of code per day, a thinking AI that could be done in 100K lines of code would take 38.5 years – or about 40 programmers one year. I suggest these companies hire me to start an ethical AI capable of Ph.D. work and beyond.
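    The commenter’s estimate can be checked directly; a minimal sketch, assuming his stated rate of 10 debugged lines per programmer per day and roughly 260 working days per year:

    ```python
    # Sanity check of the comment's arithmetic: 100K lines of code at an
    # assumed rate of 10 debugged lines per programmer per working day.
    LINES = 100_000
    RATE_PER_DAY = 10            # debugged lines per programmer per day
    WORKDAYS_PER_YEAR = 260      # ~52 weeks x 5 days (an assumption)

    person_days = LINES / RATE_PER_DAY                 # total effort
    years_solo = person_days / WORKDAYS_PER_YEAR       # one programmer
    days_each_of_40 = person_days / 40                 # forty programmers

    print(f"{person_days:.0f} person-days of effort")          # 10000
    print(f"{years_solo:.1f} years for a single programmer")   # 38.5
    print(f"{days_each_of_40:.0f} working days each for 40 programmers")  # 250
    ```

    Under those assumptions the figures in the comment do check out: 10,000 person-days is about 38.5 working years for one programmer, or roughly one working year (250 days) each for 40 programmers.
    
    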
