AI is starting to make experts on nuclear deterrence very nervous.
Specifically, they say that a widespread push to integrate AI into virtually every level of military decision-making is creating a “slippery slope” in which AI will either be given the power to launch nuclear weapons itself, or the humans with that power will become so reliant on its guidance that they’ll do so if it tells them to.
Worst of all, they say, is that this is happening while we still don’t quite understand how AI works — and as testing shows that in wargaming exercises, it tends to escalate conflicts to apocalyptic levels that humans would have cooled down.
“It’s almost like the AI understands escalation, but not de-escalation,” Stanford’s Jacquelyn Schneider, the director of the university’s Hoover Wargaming and Crisis Simulation Initiative who has tested AI systems’ response to military wargaming, told Politico in a sobering new story. “We don’t really know why that is.”
This is all coming as the Trump administration seeks to push AI into many aspects of government, while stripping down safety regulations on the tech.
“There is no standing guidance, as far as we can tell, inside the Pentagon on whether and how AI should or should not be integrated into nuclear command and control and communications,” Federation of American Scientists director of global risk Jon Wolfsthal told Politico.
For now, the Pentagon insists that there will always be a human in the loop.
“The administration supports the need to maintain human control over nuclear weapons,” a senior official tersely told the outlet.
The fear of experts like Schneider, though, is that either that commitment will get eaten away as adversaries like Russia and China incorporate AI into their own high-stakes military command structures — or that Pentagon officials will stumble into a nuclear conflict because a flawed AI system tells them that it’s unavoidable.
“I’ve heard combatant commanders say, ‘Hey, I want someone who can take all the results from a war game and, when I’m in a [crisis] scenario, tell me what the solution is based on what the AI interpretation is,'” Schneider fretted.
In a sense, these are all new versions of old problems. Russia is believed to still be maintaining a Cold War-era “dead hand” system that would automatically retaliate against a detected nuclear strike, though the system may not currently be activated.
And sure, as Politico makes clear, the whole problem sounds like a fictional backstory for an apocalyptic sci-fi movie. But as it turns out, even after watching all those movies, we can’t seem to avoid sliding into that exact reality.
“Admittedly, such a suggestion will generate comparisons to Dr. Strangelove’s doomsday machine, WarGames’ War Operation Plan Response, and the Terminator’s Skynet,” a pair of nuclear deterrence experts wrote in a 2019 blog post calling for the United States to develop its own dead hand system, “but the prophetic imagery of these science fiction films is quickly becoming reality.”
More on military AI: Pentagon Signs Deal to “Deploy AI Agents for Military Use”