Andrew Yang warns against ‘slaughterbots’ and urges global ban on autonomous weaponry

Authored by venturebeat.com and submitted by memes4_all

Ahead of the Democratic presidential primaries that begin Monday with the Iowa caucus, presidential candidate Andrew Yang called for a global ban on the use of autonomous weaponry. In a tweet Thursday, Yang called for U.S. leadership to implement a ban on automated killing machines, then shared a link to a Future of Life Institute video titled “Slaughterbots,” which offers a cautionary and dystopian vision of the future.

Yang has been the most vocal candidate in the 2020 race about AI and how it will change people’s lives. During his campaign, he’s addressed how the technology will impact the future of work, job loss, and national security.

In debates leading up to the primaries, Yang pledged to give $1,000 a month to a select few individuals as part of his universal basic income (UBI) experiment and said his first call as president would be to China — partly to work together on issues like AI.

The video was first released in 2017 by the Future of Life Institute's campaign ahead of a meeting of the UN Convention on Certain Conventional Weapons. It opens with a tech CEO-type giving a keynote address in which he unveils autonomous drones equipped with facial recognition and AI piloting systems that kill people with a small explosive charge to the head.

In the video, the fictional CEO promises the ability to target and wipe out “the bad guys” or people with “evil ideology” or even entire cities.

The video then imagines the outbreak of partisan political warfare. The drones are used to assassinate 11 U.S. Senators of one political party at the U.S. Capitol building. In the wake of the hypothetical attack, the intelligence community is unable to determine which state, group, or individual carried it out, and in the confusion, calls for war and violent crime ratchet up.

There is some precedent in reality.

Russian arms maker Kalashnikov is developing a kamikaze drone, and though the drone involved was most likely piloted by a human, the world saw one of the first attempted drone assassinations of a political leader in 2018 in Venezuela. DARPA is developing ways for swarms of drones to take part in military missions, and the U.S. Department of Defense has developed hardware to guard against weaponized drone attacks.

The FAA warned U.S. citizens last year that they face a $25,000 fine for weaponizing drones. In the video, meanwhile, autonomous lethal drones lead to a general increase in violent crime, people feeling unsafe in their homes, and the destabilization of society.

The video ends with commentary from UC Berkeley professor Stuart Russell, who is not a fictional person and now serves as vice chair of the World Economic Forum's Council on AI and Robotics. "Allowing machines to choose to kill humans would be devastating to our security and freedom," Russell said.

Russell is among the signatories of a Future of Life Institute petition in support of an autonomous weapons ban, first circulated in 2015, which now has thousands of signatories, including nearly 250 businesses and organizations and individuals such as Google AI chief Jeff Dean, DeepMind cofounder Demis Hassabis, and Elon Musk.

In a recent interview with VentureBeat, R. David Edelman, a former cybersecurity director and White House economic advisor during the Obama administration, implored the Trump administration to engage in direct bilateral talks with Chinese leaders to avoid confusion sparked by malicious AI that could lead to war or an accelerated arms race.

A draft report advising Congress, released last fall by the National Security Commission on Artificial Intelligence, said that AI supremacy is essential to U.S. national security and the economy, while the Chinese government is investing billions to become the world leader in AI by 2030. A Brookings Institution fellow recently predicted that the world leader in AI by 2030 will be the dominant global power until 2100.

KeeganMakesKief on February 1st, 2020 at 04:46 UTC »

Yeah sure, but what do we do when the people who already have the AI death bots just say no? The laws regarding warfare are usually themselves the first casualties of war.

TheSholvaJaffa on February 1st, 2020 at 04:41 UTC »

It really do feel like 2020 when yang is speaking of all this futurism stuff

Nintenfan81 on February 1st, 2020 at 01:58 UTC »

I thought this meant automatic weapons instead of self-directed war machines and I was utterly baffled for a few moments.

Yeah, AI death robots are probably a slope we don't want to start sliding on.