
Who’s In Control?

The Washington Post just asked one of the most important questions in the field of artificial intelligence: “Are we fully in control of our technology?”

There are plenty of other questions about artificial intelligence currently attracting media attention, such as: Is superintelligence imminent, and will it kill us all? As necessary as it is to consider those questions now, others are equally relevant and timely but often overlooked by the press:

How much destruction could be caused today, or within the next few years, by something as simple as an error in an algorithm?

Is the development of autonomous weapons worth the risk of an AI arms race, along with all the other dangers it creates?

And…

How much better could life get if we design the right artificial intelligence?

Joel Achenbach, the author of the Washington Post article, considers these questions and more as he writes about his interviews with people like Nick Bostrom, Stuart Russell, Marvin Minsky, our very own Max Tegmark, and many other leading AI researchers. Achenbach provides a balanced look at artificial intelligence as he discusses Bostrom's hopes and concerns, the current state of AI research, what the future might hold, and the many accomplishments of FLI this year.

Read the full story here.

1 reply
  1. Mindey says:

    In my opinion, most of the control is concentrated in the intelligence agencies and military-industrial complexes, along with their contractors and subsidiaries.

    However, this control may not last very long. Suzanne Sadedin seems right in saying that there is a risk of supercharging the capitalist, competition-driven world with open AI technologies ( http://qz.com/580080/evolutionary-biologist-elon-musk-is-right-about-the-threat-of-ai-but-hes-wrong-about-why/ ), and this could produce strong competitive adversaries to the existing control structures.

    From a human perspective, though, the competition should not be between humans at all; it should be between ideas. We should select ideas to cooperate on rather than select humans and their groups for survival.

    As mankind, we can cooperate by moving the competition away from the level of projects and into the level of ideas. Once we decide on something at the level of ideas, we could create multiple projects that try to realize it with whatever technologies people prefer, in the spirit of running experiments, where every experimenter is paid regardless of whether their startup fails.

    For this to work, however, I think these idea-level agreements would have to be international and cross-lingual, because even one country running wild, competition-driven evolutionary experiments in its society could evolve technologies dangerous to the whole world.
