You know what I totally want? Silicon Valley bros and Anduril deciding whether or not an AI-enabled weapon should be able to kill on its own, with no human “in the loop”. That was sarcasm, by the way.
In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous — meaning an AI algorithm would make the final decision to kill someone. “Congress doesn’t want that,” the defense tech founder told TechCrunch. “No one wants that.”
But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons — or at least a heavy skepticism of arguments against them. The U.S.’s adversaries “use phrases that sound really good in a sound bite: Well, can’t you agree that a robot should never be able to decide who lives and dies?” Luckey said during a talk earlier this month at Pepperdine University. “And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?”
When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn’t mean that robots should be programmed to kill people on their own, just that he was concerned about “bad people using bad AI.”
In a world where a huge chunk of the US population already distrusts both the ultra-wealthy and tech bros, in what universe does it make sense to have them even remotely involved in the discussion? This is a question so vast and consequential that Congressional and Presidential debate should be vigorous and long. It may very well require a popular vote at the federal level. AI-enabled weapons are so life-changing that they deserve that much respect.
The morality of AI-enabled weapons, as well as their practicality, must be debated in the face of adversaries who operate under no such restrictions. In other words, in a world where China doesn’t give a damn about the morality of AI-enabled weapons, can we really stand on the high ground and refuse to use them?
Activists and human rights groups have long tried and failed to establish international bans on autonomous lethal weapons — bans that the U.S. has resisted signing.
I wonder why. Is China going to sign and respect any agreement on the use of AI on the battlefield? I don’t think so. So why should the US bind itself to a piece of paper?
You have no idea just how disturbing a future it is to have AI-enabled intercontinental ballistic missiles (ICBMs), let alone swarms of missiles all able to dynamically change their flight paths to both evade air defenses and hit moving targets. ICBMs and cruise missiles can already do both of these things to an extent, but putting AI in either of them takes the threat to a whole other level. We all focus on robots with guns, when in reality it is things like drones and missiles where AI will have the most disturbing and massive impact.
AI weapons are no paper tigers.
In the end it’s very much like the nuclear weapons buildups of the past. Here’s a new technology that changes everything, that changes the balance of power, and there’s no way to handle that except a race to ever more powerful weapons in ever larger quantities.
So should AI weapon systems be able to decide on their own whether or not to kill someone?
If we’re being honest, my answer is conditional.
If China is going to kill people without a thought, then we have to as well.