EA Global X Boston Conference

The first EA Global X conference, EAGxBoston, is being held at MIT on April 30th, 12:30-6:30pm. Boston EAs have created an incredible lineup bringing together a who's who of researchers, EAs, EA organizations, and up-and-coming organizations, including:
Dean Karlan (Yale, Innovations for Poverty Action)
Joshua Greene (Harvard, Moral Cognition Lab)
Rachel Glennerster (MIT, Poverty Action Lab)
Piali Mukhopadhyay (GiveDirectly)
Julia Wise (The Centre for Effective Altruism)
Ian Ross (Hampton Creek, Facebook)
Allison Smith (Animal Charity Evaluators)
Elizabeth Pearce (Boston University, Iodine Global Network)
Cher-Wen DeWitt (One Acre Fund)
Rhonda Zapatka (Trickle Up)
Elijah Goldberg (ImpactMatters)
Jason Ketola (MaxMind)
Lucia Sanchez (Innovations for Poverty Action)
Sharon Nunez Gough (Animal Equality)
Bruce Friedrich (The Good Food Institute, New Crop Capital)
Jon Camp (The Humane League)
Victoria Krakovna (Harvard, Future of Life Institute)
Eric Gastfriend (Harvard Business School EA, FLI, and formerly 80,000 Hours)
Dillon Bowen (Tufts EA, formerly 80,000 Hours and Giving What We Can)
Jason Trigg (earning-to-give at a startup and formerly as a hedge fund quant)
and more

The day will be filled with talks, panels, and networking opportunities. The program will address the major effective altruist cause areas of global health, poverty, and development; animal agriculture; and global catastrophic risk, as well as movement concerns like conducting research, building community, and choosing a career direction. We will also be introducing some up-and-coming organizations.

FLI's Victoria Krakovna, Richard Mallah, and Lucas Perry will participate in a panel on global catastrophic risks.

More information and registration can be found on the conference website:
http://eagxboston.com

All proceeds after our minimum costs will be donated to EA charities. If you need a tax receipt, please contact Randy Carlton <[masked]>. Please note that the early bird special ends on April 19th.

We have a limited amount of space, so if you’d like to join, please register today and share this invitation with interested friends via our Facebook group:
https://www.facebook.com/EAGxBoston/

Let's get together and learn what we can do even better together!

EAGxBoston Team from MIT Sloan EA, MIT EA, Tufts EA, Harvard EA, HBS EA, Animal Charity Evaluators and The Commonwealth Market
http://eagxboston.com

Policy Exchange: Co-organized with CSER

This event was held on September 1, 2015.

When one of the world's leading experts in artificial intelligence makes a speech suggesting that a third of existing British jobs could be made obsolete by automation, it is time for think tanks and the policymaking community to take notice. This observation by Michael Osborne, Associate Professor of Machine Learning at the University of Oxford, was one of many thought-provoking comments made at a special event on the policy implications of the rise of AI, which we held this week with the Cambridge Centre for the Study of Existential Risk.

The event formed part of Policy Exchange's long-running research programme looking at the reforms needed in welfare and workplace policy, as well as other long-term challenges facing the country. We gathered the world's leading authorities on AI to consider the rise of this new technology, which will pose one of the greatest challenges facing our society this century. The speakers were:

•    Huw Price, Bertrand Russell Professor of Philosophy and a Fellow of Trinity College at the University of Cambridge, and co-founder of the Centre for the Study of Existential Risk.

•    Stuart Russell, Professor at the University of California, Berkeley, and co-author of the standard textbook on AI.

•    Nick Bostrom, Professor at Oxford University, author of “Superintelligence” and Founding Director of the Future of Humanity Institute.

•    Michael Osborne, Associate Professor at Oxford University and co-director of the Oxford Martin Programme on Technology and Employment.

•    Murray Shanahan, Professor at Imperial College London, scientific advisor to the film Ex Machina, and author of “The Technological Singularity”.

In this bulletin, we ask two of our speakers to share their views on the potential of, and the risks from, future AI:

Michael Osborne: Machine Learning and the Future of Work 

Machine learning is the study of algorithms that can learn and act. Why use a machine when we already have over six billion humans to choose from? One reason is that algorithms are significantly cheaper, and becoming ever more so. But just as importantly, algorithms can often do better than humans, avoiding the biases that taint human decision-making.

There are big benefits to be gained from the rise of the algorithms. Big data is already leading to programs that can handle increasingly sophisticated tasks, such as translation. Computational health informatics will transform health monitoring, allowing us to release patients from their hospital beds much earlier and freeing up resources in the NHS. Self-driving cars will allow us to cut down on the 90% of traffic accidents caused by human error, while the data generated by their constant monitoring will have big consequences for mapping, insurance, and the law.

Nevertheless, there will be big challenges from the disruption automation creates. New technologies derived from mobile machine learning and robotics threaten employment in logistics, sales, and clerical occupations. Over the next few decades, 47% of jobs in America and 35% of jobs in the UK are at high risk of automation. Worse, it is the already vulnerable who are most at risk, while high-skilled jobs are relatively resistant to computerisation. New jobs are emerging to replace the old, but only slowly: just 0.5% of the US workforce is employed in new industries created in the 21st century.

Policy makers are going to have to do more to ensure that we can all share in the great prosperity promised by technology.

Stuart Russell: Killer Robots, the End of Humanity, and All That: Policy Implications 

Everything civilisation offers is the product of intelligence. If we can use AI to amplify our intelligence, the benefits to humanity are potentially immeasurable. 

The good news is that progress is accelerating. Solid theoretical foundations, more data and computing, and huge investments from private industry have created a virtuous cycle of iterative improvement. On the current trajectory, further real-world impact is inevitable.

Of course, not all impact is good. As technology companies unveil ever more impressive demos, newspapers have been full of headlines warning of killer robots, the loss of half of all jobs, or even the end of humanity. But how credible are these nightmare scenarios? The short answer is that we should not panic, but there are real risks worth taking seriously.

In the short term, the most pressing concern is lethal autonomous weapons: weapon systems that can select and fire upon targets on their own. According to defence experts, including the Ministry of Defence, these are probably feasible now, and they have already been the subject of three UN meetings in 2014-15. In the future, they are likely to be relatively cheap to mass-produce, potentially making them much harder to control or contain than nuclear weapons. A recent open letter from 3,000 AI researchers argued for a total ban on the technology to prevent the start of a new arms race.

Looking further ahead, however, what if we succeed in creating an AI system that can make decisions as well as, or even significantly better than, humans? The first thing to say is that we are several conceptual breakthroughs away from constructing such a general artificial intelligence, as compared to the more specific algorithms needed for an autonomous weapon or a self-driving car.

It is highly unlikely that we will be able to create such an AI within the next five to ten years, but then conceptual breakthroughs are by their very nature hard to predict. The day before Leo Szilard conceived of neutron-induced chain reactions, the key to nuclear power, Lord Rutherford was claiming that, “anyone who looks for a source of power in the transformation of the atoms is talking moonshine.”

The danger from such a “superintelligent” AI is that it would not, by default, share the same goals as us. Even if we could agree amongst ourselves on what the best human values were, we do not understand how to reliably formalise them into a program. If we accidentally give a superintelligent AI the wrong goals, it could prove very difficult to stop.

For example, for many benign-sounding final goals we might give the computer, two plausible intermediate goals are to acquire as many physical resources as possible and to refuse to allow itself to be terminated. We might think that we are just asking the machine to calculate as many digits of pi as possible, but it could judge that the best way to do so is to turn the whole Earth into a supercomputer.

In short, we are moving with increasing speed towards what could be the biggest event in human history. As with global warming, there are significant uncertainties involved and the pain might not come for another few decades; but, again as with global warming, the sooner we start to look at potential solutions, the more likely we are to be successful. Given the complexity of the problem, we need much more technical research on what an answer might look like.

AI Conference

This event was held January 2-5, 2015 in San Juan, Puerto Rico.

We organized our first conference, The Future of AI: Opportunities and Challenges. This conference brought together the world’s leading AI builders from academia and industry to engage with each other and with experts in economics, law, and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. To facilitate candid and constructive discussions, no media were present and the Chatham House Rule applied: nobody’s talks or statements will be shared without their permission.

Most of the speakers have posted their talks. You’ll find a list of participants and their bios here.

Martin Rees: Catastrophic Risks: The Downsides of Advancing Technology

This event was held Thursday, November 6, 2014 in Harvard auditorium Jefferson Hall 250.

Our Earth is 45 million centuries old. But this century is the first when one species, ours, can determine the biosphere’s fate. Threats from the collective “footprint” of 9 billion people seeking food, resources, and energy are widely discussed. But less well studied is the potential vulnerability of our globally-linked society to the unintended consequences of powerful technologies: not only nuclear, but (even more) biotech, advanced AI, geo-engineering, and so forth. More information here.

Nick Bostrom: Superintelligence — Paths, Dangers, Strategies

This event was held Thursday, September 4th, 2014 in Harvard auditorium Emerson 105.

What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? In his new book – Superintelligence: Paths, Dangers, Strategies – Professor Bostrom explores these questions, laying the foundation for understanding the future of humanity and intelligent life.

Photos from the talk

Max Tegmark: “Ask Max Anything” on Reddit

 This event was held Wednesday, August 20th, 2014, in the “IAmA” subreddit on reddit.com. Read it here!

Max Tegmark answered questions from reddit.com’s user base on the subjects of his book “Our Mathematical Universe”, physics, x-risks, AI safety, and AI research.

The Future of Technology: Benefits and Risks

This event was held Saturday, May 24, 2014 at 7pm in MIT auditorium 10-250 – see the video, transcript, and photos below.

The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about what we can do now to improve the chances of reaping the benefits and avoiding the risks, moderated by Alan Alda and featuring George Church (synthetic biology), Ting Wu (personal genetics), Andrew McAfee (second machine age, economic bounty and disparity), Frank Wilczek (near-term AI and autonomous weapons), and Jaan Tallinn (long-term AI and singularity scenarios).

  • Alan Alda is an Oscar-nominated actor, writer, director, and science communicator, whose contributions range from M*A*S*H to Scientific American Frontiers.
  • George Church is a professor of genetics at Harvard Medical School who initiated the Personal Genome Project and invented DNA array synthesizers.
  • Andrew McAfee is Associate Director of the MIT Center for Digital Business and author of the New York Times bestseller The Second Machine Age.
  • Jaan Tallinn is a founding engineer of Skype and philanthropically supports numerous research organizations aimed at reducing existential risk.
  • Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.
  • Ting Wu is a professor of genetics at Harvard Medical School and Director of the Personal Genetics Education Project.
Photos from the talk