Experts Warn That AI Is Getting Control of Nuclear Weapons



“It’s going to find its way into everything.”

Nobel laureates met with nuclear experts last month to discuss AI and the end of the world — and if that sounds like the opening to a sci-fi blockbuster set in the apocalypse, you’re not alone.

As Wired reports, the convened experts seemed to broadly agree that it’s only a matter of time until an AI gets hold of nuclear codes. Exactly why that outcome should be inevitable is hard to pin down, but the sense of inevitability, and the anxiety that comes with it, is palpable in the magazine’s reporting.

“It’s like electricity,” retired US Air Force major general and member of the Bulletin of the Atomic Scientists’ Science and Security Board, Bob Latiff, told Wired. “It’s going to find its way into everything.”

It’s a bizarre situation. AIs have already been shown to exhibit numerous dark streaks, resorting to blackmailing human users at an astonishing rate when threatened with being shut down.

In the context of an AI, or networks of AIs, safeguarding a nuclear weapons stockpile, those sorts of poorly understood risks become immense. And that’s without getting into a genuine concern among some experts, which also happens to be the plot of the movie “The Terminator”: a hypothetical superhuman AI going rogue and turning humanity’s nuclear weapons against it.

Earlier this year, former Google CEO Eric Schmidt warned that a human-level AI may not be incentivized to “listen to us anymore,” arguing that “people do not understand what happens when you have intelligence at this level.”

That kind of AI doomerism has been on the minds of tech leaders for many years now, as reality plays a slow-motion game of catch-up. In their current form, the risks would probably be more banal, since the best AI models today still suffer from rampant hallucinations that greatly undercut the usefulness of their outputs.

Then there’s the threat of flawed AI tech leaving gaps in our cybersecurity, allowing adversaries — or even adversary AIs — to access systems in control of nuclear weapons.

To get all members of last month’s unusual meeting to agree on a topic as fraught as AI proved challenging, with Federation of American Scientists director of global risk Jon Wolfsthal admitting to the publication that “nobody really knows what AI is.”

Still, they did find some common ground.

“In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking,” Wolfsthal added. Latiff agreed that “you need to be able to assure the people for whom you work there’s somebody responsible.”

If this all sounds like a bit of a clown show, you’re not wrong. Under President Donald Trump, the federal government has been busy jamming AI into every possible domain, often while experts warn that the tech is not yet — and may never be — up to the task. Hammering the bravado home, the Department of Energy declared this year that AI is the “next Manhattan Project,” referencing the World War II-era project that resulted in the world’s first nuclear bombs.

Underscoring the seriousness of the threat, ChatGPT maker OpenAI also struck a deal with the US National Laboratories earlier this year to use its AI for nuclear weapon security.

Last year, Air Force general Anthony Cotton, who’s effectively in charge of the US stockpile of nuclear missiles, boasted at a defense conference that the Pentagon is doubling down on AI, arguing that it will “enhance our decision-making capabilities.”

Fortunately, Cotton stopped short of declaring that we must let the tech assume full control.

“But we must never allow artificial intelligence to make those decisions for us,” he added at the time.

More on AI and nuclear weapons: OpenAI Strikes Deal With US Government to Use Its AI for Nuclear Weapon Security


