
State your case: Should police be permitted to use robots to deliver lethal force?

In a surprise move, the San Francisco Board of Supervisors recently voted to allow the San Francisco PD to deploy robots equipped to deliver lethal force.


UPDATE: On Dec. 6, San Francisco supervisors voted to put the brakes on a policy that would have let police use robots for deadly force, reversing course just days after their approval of the plan.

On November 29, the San Francisco Board of Supervisors voted to give the San Francisco Police Department the ability to use remote-controlled robots equipped with explosive charges to contact, incapacitate, or disorient violent, armed, or dangerous suspects when lives are at stake. “Robots equipped in this manner would only be used in extreme circumstances to save or prevent further loss of innocent lives,” SFPD spokesperson Allison Maxie said in a statement.

This news may have been surprising to many, especially as there had been significant public outcry when the SFPD first rolled out its draft robot policy. Just a month before and only a few miles away, the Oakland Police Department had announced that, after exploring the idea of shotgun-armed robots in discussions with the Oakland Police Commission and community members, it no longer wanted to pursue that option.

We first saw a robot deliver lethal force in 2016, when the Dallas Police Department rigged a bomb-disposal robot to kill an armed suspect who had murdered five Dallas police officers. “Other options would have exposed our officers to great danger,” said former Dallas Police Chief David Brown at the time.

Should police be allowed to use robots to deliver lethal force? Our experts debate the issue. Share your thoughts in the box below.

The ground rules: As in an actual debate, the pro and con sides are assigned randomly as an exercise in critical thinking and analyzing problems from different perspectives.

Our debaters: Jim Dudley, a 32-year veteran of the San Francisco Police Department, where he retired as deputy chief of the Patrol Bureau, and Chief Joel Shults, EdD, who retired as chief of police in Colorado.

Jim Dudley: Although the headline in the San Francisco newspaper read “Killer Robots OK’d for SFPD,” the truth is that the San Francisco Police Department has had Explosive Ordnance Disposal (EOD) robots for decades. The issue became a headline when the department asked for approval to arm the robots with an explosive device to neutralize an armed active shooter in extreme situations. I agree with the decision.

The department justified the request based on the rise in mass shootings over the past few years. Over 600 mass shootings have occurred in the United States so far in 2022 – far more than just a few years ago. Certainly, there is an expectation that law enforcement officers will advance toward an active shooter, but some extreme situations make that impossible. Think about the Route 91 Harvest Festival shooting in Las Vegas, where the armed and barricaded suspect fired upon music festival attendees, killing 60 and wounding another 500. Consider the North Hollywood bank robbery, where suspects wearing body armor used fully automatic weapons to fire indiscriminately at police and civilians, wounding several, in a raging gun battle that lasted over 40 minutes. There are many more instances where sending in a robot to address the active shooter would have ended the threat without further carnage.

Joel Shults: The only thing that a robot adds to any law enforcement operation is distance. And therein lies the ethical quandary.

Few would argue against using a drone to surveil a crime scene, or a robot to deliver a message to a barricaded suspect. Using either of those devices to deliver deadly force is a decidedly different proposition.

Like other ethical issues (we can talk about tactical issues later), we can always justify a singular incident as an exception to the rule for a higher moral purpose. When the Dallas Police Department sent a robot in to kill the murderer of five police officers after an hours-long standoff in 2016, the world gasped, then shrugged its shoulders. It seemed like a good idea at the time and eliminated a clearly dangerous threat. The robot was fitted with an explosive device rather than a firearm, but deadly force is deadly force whether it results from a .45 slug, a patrol vehicle bumper, or a bomb. Still, the public is sensitive to what it perceives as “overkill.”

When Philadelphia police dropped a bomb from a helicopter to end a standoff in a residential neighborhood in 1985, 11 occupants of the suspect house were killed, and the resulting fire spread, destroying 61 homes. Not a robot, but a resolution using a remote device.

The infamous Waco compound of David Koresh was destroyed by fire after tear gas was injected into the structure from armored vehicles (tanks by anyone’s description). Granted, the fires were most likely started and fueled by the Davidians themselves, although that remains in dispute. But the comparison here is that the operation was capped by what was essentially an application of force by remote, non-human means.

Both of these were, of course, extreme and anomalous situations. In Philadelphia, the police had already expended over 10,000 rounds of ammunition in the extended gun battle in an attempt to serve a search warrant. In Waco, four ATF agents had been killed in an attempted tactical entry to serve a warrant, and the siege had lasted 51 days. We can say that a human being will always be the one deciding on pulling the robot’s trigger, but it will always be from a distance and never with a full view – notwithstanding advanced audio and video being beamed to the operator. The tactical hypotheticals include whether a remote device can be hacked and control wrested from law enforcement, and whether a weapon introduced into a suspect’s presence can be defeated and turned against others. I’m just not sure we’re ready to strap a Glock on every remote tracked, wheeled, winged, or four-legged automaton in our tool kits.

Jim Dudley: Great points, Joel, but let’s compare apples to apples. Of course, those opposing the robot idea are probably those who shouted the loudest for defunding the police. They are the same people who believe that cops should sacrifice themselves and charge the shooters – with or without hostages. There will never be a plan foolproof enough to appease them.

That said, law enforcement officers rush in to confront active shooters, or into “hot zones” where they are being fired upon. In most of the cases described, neither fire department nor EMS personnel would enter to treat the wounded. Indeed, in long, drawn-out scenarios, many victims are left to bleed out because rescuers are prevented from evacuating them or applying field aid. I hope we start developing rescue robots to address those situations.

The SFPD plan cited only the “most violent and extreme” situations where the lethal force option would be considered. Your idea of hacking a remote is a real possibility, but that could be addressed with sophisticated software or by simply hard-wiring the device. Only the first or second in command would authorize its use. When it comes to lethal force, there should never be an autonomous robot in play.

Without exposing some of the nuanced technology already in use in policing today, I’m comfortable with having this option available. Clearly, a majority of elected officials in arguably one of the most liberal cities in the country agree.

With any luck, we would never have to resort to using this apparatus, but then again, luck should never be part of any plan to address mass shooters.

Joel Shults: I didn’t mention the apparent irony of this decision coming out of San Francisco, of all places. It seems our West Coast states don’t want much police action at all, but they are OK with killer bots!

I suppose the extreme circumstances envisioned by the San Francisco Board of Supervisors describe a decision most everyone would cheer. The slippery slope question is not merely speculative, however. Here are some things to consider:

  • Will deadly force by humans now be called into question when the bot option isn’t used or available?
  • Will decision by committee (you can imagine the lawyers, politicians and top brass peering at the monitor in the command post, looking like the White House Situation Room when bin Laden was killed) become the expected norm?
  • Will second-guessing and legal challenges clog up the works on this potential tool in the near future?
  • Will manufacturers add trigger fingers to robotic arms or integrate built-in weaponry into their designs?
  • Will a non-emotional, non-stressed robot be expected to deliver a bullet to a pinpoint non-lethal body part using laser targeting? If so, will “shoot to wound” become a standard protocol?
  • Must robots be equipped with less lethal options – press the red button for 9mm, the blue button for buckshot, the yellow button for TASER probes, the green button for a bean bag?
  • Will drones be equipped with firearms?
  • Or will explosives, rather than bullets, be the primary mode of lethal force delivery?

Our evolving theories on immediate response to active harmers certainly should include remote options – I can’t argue against that. Our current doctrine of the first officer on scene shooting their way in to kill the offender (I’m sorry, I meant “stop the threat”) involves a suicidal heroism that may not be the most effective tactical response, but that’s another article. We all know that seconds count, and robots simply won’t be the first responder; they will take a relatively long time to deploy. My concern is that the more familiar we get with the robokill option, the more frequently it will be used outside of the extreme parameters we are imagining now. There’s no question that it is a philosophical conundrum to want to keep humanity in the decision to kill, but we could easily become far too comfortable with technology doing the hardest thing any officer can be called on to do.

Jim Dudley: Yes, yes and yes.

Litigation is a way of life in our litigious society. Cities have paid out thousands to millions of dollars to the families or survivors of offenders, even in cases where officers acted appropriately.

We will always be second-guessed. Remember when the highest elected official in the country queried, “Why didn’t they just shoot him in the leg?”

Clearly, policy must be airtight. A matrix needs to list all potential scenarios, and use of the robot should occur only in the strictest of situations.

As far as wait time is concerned, it would be the same as a SWAT call-out, with the robot set up while the criteria are being assessed. If the situation doesn’t meet the threshold, then “back in the box, robot!”

Using robotic technology is a foregone conclusion. We are seeing drones advance into situations ahead of patrol officers, automated license plate readers are operational in the field, and even Boston Dynamics’ DigiDog has been used to assist police operations. Some private entities use roving robots on campuses and in malls to observe and report.

Active and responsive robots are inevitable. It only makes sense to put machines into situations to fill the gaps where the human element is not allowed, is unavailable due to recruitment issues, or faces too much danger. Robots have a place in EOD investigations and disposal, in hazmat environments, and in the very rare case of an active mass shooter, when sending in live operators is too dangerous. We shouldn’t have to stand by and watch victims suffer and die when there are no other viable solutions.

Joel Shults: All good points, Jim, especially the inevitability of it all. I just hope we have a good measure of healthy skepticism and a little fear in the mix. We’re only human, after all.

Police1 readers respond: Should police be able to use robots to deliver lethal force?

  • I think the wording that so many people misunderstand (and so many news outlets fail to make clear) is that “robot” in these instances likely means something like a radio-controlled robot, not cyborgs.
  • Police officers should be able to use any equipment they need to keep themselves and the public safe.
  • An officer using a robot will face the same scrutiny as an officer on scene, but may be more subject to blame: an officer on scene confronts the suspect directly in a live incident, while an officer operating a robot sees the scene through video – a different, more enclosed perspective.
  • Yes. Use of force is use of force no matter who or what is applying it. If a subject is engaged in a situation where the use of deadly force is warranted, then it should be OK to deliver that force by any and all means necessary.
  • Most definitely, because it minimizes the risk to officers.
  • The robot is no different than a gun. It’s the operator that makes the decision and sends the command to use force, deadly or otherwise. If the use of deadly force is justified by an individual immediately on the scene, then the use of force delivered by a robot is justified.
  • Absolutely! I support the use of lethal force robots by any police force or law enforcement organization, within the guidelines adopted by the specific community.
  • Yes. This could be a lengthy discussion about when and under what circumstances but it should be an option.

  • I will echo what is stated in the first response on the list: these are remote-controlled (by officers) devices! Not intending to be funny, but I believe some people are envisioning these as the large, walking autonomous “Peacekeeper” machines from the first “RoboCop” movie, shooting any “lawbreaker.”

  • Police officers are not simply selected, they are qualified. They go through a rigorous qualification and training process, one designed for an officer to use all of their senses and emotions to perform. If you take any part of that away, you are left with an officer controlling a robot in a third-person setting that is more like a video game than real life. Fear is no longer a motivation for lethal force unless the robot actively interdicts a threat made to another person. If this is the case, wouldn’t it be a better idea to design defensive “robots” that could shield a victim or press an offender away from a victim? What perceptions will offenders have toward “robots”? Will they be able to be reasoned with or de-escalated in this manner? I can imagine only a small window of viability for such a tool, and the cost would far exceed any benefit of an armed “robot.”

  • Yes, they should be able to. We saw it previously in the Dallas shooting, where several officers were murdered by a crazed suspect who wanted blood – law enforcement blood. The situation was effectively ended using a robot employing explosives. Had that method not been available, more officers likely would have been injured or killed by the gunman before his rampage ended. Prohibiting agencies from using tools like this is not the way and will only lead to more casualties on the side of LE. Robots delivering lethal force are not often used, but when they are, it is the safest and correct choice: at that point, negotiations have failed, and nonlethal force is not an option. I am as wary about AI and sentient robots as most. I think that arming a robot and giving it sentience, or the ability to deliver lethal force by itself without explicit input by an operator, will only lead to one inevitable outcome – disaster. However, the thing many articles seem to (perhaps purposely) miss is that these “killer robots” are still operated by humans, and the commands are input by humans. These are not Terminator robots, but robots on tracks, not dissimilar to what EOD teams use. That fear is misguided and not (at this point, anyway) warranted. The bottom line: enabling police to deliver lethal force via a remote-controlled vehicle (robot) is safer for the officers and for the public.

  • Only where a “mass shooting” or “deadly standoff” is happening, which means there is already a civilian casualty or a suspect has shot civilians – especially when it comes to school shootings. If the suspect is underage and has tried to kill or has already killed civilians, send in the bot with deadly force. But, I can’t stress this enough, under no circumstance should a robot be GIVEN A CHOICE to use deadly force or not. AI software or programs are far too dangerous to be left to decide when and where to kill any human. Ever.

  • No use of force can be made to look elegant. If someone poses an imminent risk of great bodily injury or death to another, there is no reason for an officer to unnecessarily risk their life to neutralize that suspect just to “appear sporting about it” when an alternate, safer means is available.
