Hawking, Musk, and Wozniak Sign Open Letter to Stop Skynet

Gunpowder was the first revolution in modern warfare, making the sword and shield, the bow and arrow, effectively obsolete. The next was nuclear weapons, able to destroy not individual enemies but whole cities. The third, warned the International Joint Conference on Artificial Intelligence, meeting this week in Buenos Aires, Argentina, will likely be war waged by artificial intelligence.

The conference’s open letter, signed by over 1,000 A.I. researchers and the likes of theoretical physicist Stephen Hawking, Tesla and SpaceX CEO Elon Musk, and Apple co-founder Steve Wozniak, urges a “ban on offensive autonomous weapons beyond meaningful human control.”

In other words, we need to get out ahead of Skynet before it happens.

Autonomous weapons operate without human control: an A.I. weapons system could seek out and eliminate targets based on programming and processing power alone, such as an armed quadcopter scanning a battlefield for faces.

The argument for arming A.I.s is that the technology could reduce the number of human soldiers put into combat, and therefore reduce the loss of life, but the conference’s signatories believe that benefit would be quickly eclipsed by an exponentially accelerating arms race.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” they write. “…The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs [AK-47s] of tomorrow.”

Curtailing weaponized artificial intelligence wouldn’t work like current restrictions on building an atom bomb. Nuclear material is hard to acquire, and building a nuke requires specialized equipment. Autonomous weapons, on the other hand, “require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”

That’s why, the conference’s signatories believe, we have to carefully restrict, or ban outright, the development and spread of the technology behind these weapons. They imagine targeted assassinations and repressive regimes controlling populations with Terminators. Literally.

And like the chemists and nuclear physicists who have no interest in using science to make weapons, most A.I. researchers have no desire to create a Skynet, the letter assures us. Artificial intelligence could revolutionize human life without also learning new ways to take it. The signatories believe we are rapidly approaching this crossroads: the next revolution in weaponry is “feasible within years, not decades,” and now is the time to choose the right path.

“In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so,” the letter concludes. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

HT: The Guardian
