Drone Swarms Are Getting Too Fast For Humans To Fight, U.S. General Warns

Authored by forbes.com and submitted by speckz

General John Murray, head of Army Futures Command, told a webinar audience at the Center for Strategic & International Studies that humans may not be able to fight swarms of enemy drones, and that the rules governing human control over artificial intelligence might need to be relaxed.

"When you are defending against a drone swarm, a human may be required to make that first decision, but I am just not sure any human can keep up," said Murray. "How much human involvement do you actually need when you are [making] nonlethal decisions from a human standpoint?"

A swarm of 75 drones, including kamikaze attack drones, was demonstrated recently by the Indian Army; 1,000-strong drone swarms are its next goal. (Photo: Indian Army)

This indicates a new interpretation of the Pentagon’s rules on the use of autonomous weapons. These require meaningful human control over any lethal system, though that may be in a supervisory role rather than direct control – termed ‘human-on-the-loop’ rather than ‘human-in-the-loop.’
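To make the distinction concrete, here is a minimal sketch in Python (purely illustrative; the function names, the veto window, and the callbacks are assumptions, not anything drawn from published Pentagon doctrine). In-the-loop blocks until a person approves each shot; on-the-loop fires by default and gives the supervisor only a short window to veto:

```python
import time

def engage_in_the_loop(track, request_human_approval):
    """Human-in-the-loop: nothing fires until a person approves this track."""
    if request_human_approval(track):   # blocks at human speed
        return "fire"
    return "hold"

def engage_on_the_loop(track, human_veto, veto_window_s=0.5):
    """Human-on-the-loop: the machine fires by default; the supervising
    human can only veto within a short window."""
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:
        if human_veto(track):
            return "hold"
        time.sleep(0.01)
    return "fire"                       # no veto in time: machine proceeds

# Illustrative use with stub callbacks:
print(engage_in_the_loop({"id": 1}, request_human_approval=lambda t: True))  # fire
print(engage_on_the_loop({"id": 2}, human_veto=lambda t: False))             # fire
```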

Murray said that Pentagon leaders need to lead a discussion on how much human control of AI is needed to be safe but still effective, especially in the context of countering new threats such as drone swarms. Such swarms are likely to synchronize their attacks so the assault comes from all directions at once, with the aim of overwhelming air defenses. Military swarms of a few hundred drones have already been demonstrated; in the future we are likely to see swarms of thousands, or more. One U.S. Navy project envisages having to counter up to a million drones at once.
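The arithmetic of saturation is straightforward, and a toy calculation makes the point (both capacity numbers below are assumptions for illustration, not figures from the article):

```python
# Toy saturation model: a defense that can complete `engagements_per_s`
# kills is overwhelmed once a synchronized wave delivers more drones than
# it can service within the attack window. All numbers are invented.

engagements_per_s = 2.0    # assumed kills per second for one battery
attack_window_s = 30.0     # assumed duration of the synchronized wave

capacity = engagements_per_s * attack_window_s   # 60 engagements
for swarm_size in (75, 300, 1_000, 1_000_000):
    status = "saturated" if swarm_size > capacity else "handled"
    print(f"{swarm_size:>9,} drones vs capacity {capacity:.0f}: {status}")
```

On these assumed numbers, even the 75-drone swarm already demonstrated in India exceeds one battery's capacity; the million-drone scenario is the same problem at a hopeless scale.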

The U.S. Army is spending a billion dollars on new air defense vehicles known as IM-SHORAD, with a cannon, two types of missiles, jammers, and future options for lasers and interceptor drones. Using the right weapon against the right target at the right time will be vital. Faced with large numbers of incoming threats, many of which may be decoys, human gunners are likely to be overtaxed. Murray said that the Army's standard test involving flashcard identification requires an 80% pass rate, and that during the recent Project Convergence exercise artificial intelligence software boosted this to 98% or 99%.
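The weapon-to-target matching Murray describes is, at machine level, an assignment problem. Here is a deliberately naive greedy sketch (every kill probability and stock level below is invented for illustration; this is not how IM-SHORAD's fire control actually works):

```python
# Hypothetical toy model: greedily match each incoming track to the
# available weapon with the highest estimated kill probability.

KILL_PROB = {                 # weapon -> assumed P(kill) by target type
    "cannon":  {"quadcopter": 0.7, "fixed_wing": 0.3, "decoy": 0.7},
    "missile": {"quadcopter": 0.9, "fixed_wing": 0.9, "decoy": 0.9},
    "jammer":  {"quadcopter": 0.6, "fixed_wing": 0.4, "decoy": 0.0},
}
STOCK = {"cannon": 30, "missile": 4, "jammer": 10}   # rounds remaining

def assign(tracks):
    """Greedy assignment: highest-threat tracks get the best remaining weapon."""
    plan = []
    for target_type, threat in sorted(tracks, key=lambda t: -t[1]):
        best = max(
            (w for w in STOCK if STOCK[w] > 0),
            key=lambda w: KILL_PROB[w][target_type],
            default=None,
        )
        if best is not None:
            STOCK[best] -= 1
            plan.append((target_type, best))
    return plan

print(assign([("quadcopter", 0.9), ("decoy", 0.2), ("fixed_wing", 0.8)]))
```

Note what the greedy policy does: it happily spends scarce missiles on decoys because they score well in isolation, which is exactly the resource trap a decoy-laden swarm is designed to spring.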

This is not the first time that Army Futures Command has suggested that humans on their own may be outclassed. In a briefing on SESU (System-of-Systems Enhanced Small Unit), a joint DARPA-Army program that teamed infantry with a mix of drones and ground robots, scientists noted that the human operators kept wanting to interfere with the robots' actions. Attempts to micromanage the machines degraded their performance.

“If you have to transmit an image of the target, let the human look at it, and wait for the human to hit the ‘fire’ button, that is an eternity at machine speed,” said one scientist, speaking on condition of anonymity. “If we slow the AI to human speed … we’re going to lose.”
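A back-of-the-envelope calculation shows what "an eternity at machine speed" means in practice (both latencies and the closing speed below are assumptions, not figures from the briefing):

```python
# How far an incoming drone travels during one decision cycle.
# All numbers are assumed for illustration.

closing_speed_m_s = 50.0    # assumed speed of a small attack drone (~180 km/h)

decision_latencies = [
    ("human-in-the-loop", 3.0),   # transmit image, human looks, presses fire
    ("machine-speed", 0.05),      # automated detect-to-fire loop
]
for label, latency_s in decision_latencies:
    closed = closing_speed_m_s * latency_s
    print(f"{label:18s}: {closed:6.1f} m closed per decision")
```

At those assumed numbers the human cycle concedes 150 meters per decision against 2.5 for the machine; multiply by dozens of simultaneous tracks and the scientist's arithmetic speaks for itself.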

AI is in the ascendant. The 5-0 victory over a human pilot in a virtual dogfight last August is still being debated, but there is no doubting that machines have faster reflexes, can keep track of several things at once, and are not troubled by the fatigue or fear that can lead to poor decisions in combat.

There are two responses to this. One is to try to control AI and keep it away from the battlefield, given that machines lack a human ethical sense. The Campaign to Stop Killer Robots has long argued the case against autonomous weapons, and the EU seems to agree. Last week the European Parliament set out its position: “The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement, in line with the principles of proportionality and necessity.” In other words, autonomous weapons making their own decisions should be outlawed.

However, the U.S. appears to take a different line. A government-appointed panel reporting to Congress suggested this week that, as indicated by Murray's comments, AI is likely to make fewer mistakes than humans and would be better at identifying targets.

“It is a moral imperative to at least pursue this hypothesis,” said panel vice-chairman Robert Work, a former deputy secretary of defense. He argues that autonomous weapons would reduce the casualties caused by target misidentification.

Behind this there is the military argument: if AI-controlled weapons can defeat those operated by humans, then whoever has the AIs will win, and failing to deploy them means accepting defeat.

Debate still swirls around this topic. The emergence of drone swarms and other weapons that cannot be defeated by humans alone will crystallize it. However, it is not clear whether the legal debate will be able to keep up with the technology, given how long it has already dragged on. At this rate, large-scale AI-powered swarm weapons may be used in action before the debate is concluded. The big question is which nations will have them first.

F1F2F on January 29th, 2021 at 14:35 UTC »

The 2020 Nagorno-Karabakh war was an early clue. Armenia was defeated by Azerbaijan after just 5 weeks of fighting, largely by fairly unsophisticated "suicide drones".

The US has some remarkable infantry-carried drones that let a single infantryman kill tanks 80 km away, all on his own, launched from a tube like a mortar tube. https://www.flightglobal.com/military-uavs/aerovironment-unveils-anti-armour-switchblade-600-loitering-munition/140409.article The thing is powered by an electric motor and is probably practically silent, too.

We are going to start seeing these used for assassinations too.

pawned79 on January 29th, 2021 at 14:10 UTC »

He’s talking about humans literally doing the fighting, as in a chain of command with engagement rules. When the Counter Rocket, Artillery, and Mortar (C-RAM) system engages incoming munitions with its LPWS machine guns, there is a procedure to ensure what it is shooting at is not actually a friendly helicopter or something. Since drones are non-ballistic and their flight paths can change abruptly and easily, they’re “too fast” for this traditional engagement. He’s suggesting that if counter-drone engagements have a low chance of causing an unexpected human fatality, then it might be permissible and appropriate to use fully automatic systems.
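The ballistic/non-ballistic distinction is the whole reason the traditional procedure has time to run: once you track a shell's position and velocity, gravity fixes its entire future path, so the identification check can complete and a firing solution will still be valid. A tiny physics sketch of that predictability (all numbers invented):

```python
# An unpowered shell's future position follows from physics alone,
# so a tracker can predict it arbitrarily far ahead. Numbers invented.

G = 9.81  # gravitational acceleration, m/s^2

def ballistic_position(p0, v0, t):
    """Future (x, y) of an unpowered projectile from initial state alone."""
    x0, y0 = p0
    vx, vy = v0
    return (x0 + vx * t, y0 + vy * t - 0.5 * G * t * t)

# A mortar round's position 5 s out is knowable the moment it's tracked:
print(ballistic_position((0.0, 0.0), (120.0, 120.0), 5.0))

# A powered drone, by contrast, can apply arbitrary lateral acceleration
# at any moment, so a 5-second-old firing solution may already be invalid.
```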

Edit: Here’s a list of great technology ideas people have mentioned: guns, missiles, flak, directed energy, nets, air cannons, drones, EMP, and hawks. Love it! Fantastic! I do want to reiterate, though, that his comment was about automated vs. human-in-the-loop operations, not about technology. The issue is receiving weapons release authorization in a timely fashion. Typically (being debated below), the DOD requires a weapons-free call from a person before engaging a target, to minimize unexpected casualties and collateral damage. The proposition here is that the mode of engagement for fighting drones (small, inexpensive, non-ballistic, but dangerous) would be an automated weapon system with a minimal chance of causing unintended harm.

NeedsMoreSpaceships on January 29th, 2021 at 13:24 UTC »

This was inevitable to anyone paying attention. I also think any attempt to limit AI warfare is doomed to failure: it's not like nuclear weapons, which need a massive industry and detectable test detonations; you can do it with small teams in secret labs. As soon as a serious war is declared, the tech will 'magically' appear even though it has apparently not been in development.