New Report on the National Security Risks from Weakened AI Safety Frameworks



The AI Now Institute has released a new report, Safety Co-Option and Compromised National Security: The Self-Fulfilling Prophecy of Weakened AI Risk Thresholds, sounding the alarm on how today’s AI safety efforts, led primarily by industry technologists, are weakening long-established safety protocols and jeopardizing US national security.

This report examines how an unsubstantiated AI arms race narrative and speculative concerns about “existential risk” are being used to justify the accelerated rollout of military AI systems, often in contradiction to the safety and reliability standards that have historically governed other high-risk technologies such as nuclear systems. The result is the normalization of AI systems that are untested and unreliable, and that actively erode the security and functionality of defense and civilian-critical infrastructure.

“Militaristic pushes to adopt AI led primarily by AI labs and technologists are placing life-or-death decisions in the hands of those with little public accountability,” said Heidy Khlaaf, Chief AI Scientist at the AI Now Institute. “We’re seeing the erosion of tried-and-true evaluation approaches in favor of vague claims of capabilities that fail to meet even the most basic safety thresholds.”

Safety Revisionism and Implications for National Security

The report draws lessons from risk frameworks first established during the Cold War era to govern nuclear systems. These frameworks have provided invaluable safety and dependability goals, and have helped the US establish its technological advantage and defense prowess over adversaries.

Rather than preserving the rigorous safety and evaluation processes essential to national security, AI technologists have staunchly advocated for a skewed cost-benefit justification that pushes for accelerated AI adoption at the cost of lowered safety and security thresholds. They have sought to substitute traditional safety frameworks with ill-defined “capabilities” or “alignment” counterparts that deviate from well-established military standards. This “safety revisionism” may be precisely what disadvantages US military and technological capabilities against China and other adversaries.

An Agenda to Course Correct 

This report calls for policymakers, defense officials, and global governance bodies to reestablish democratic oversight and ensure that any AI deployed in safety-critical or military applications is subject to the same rigorous, context-specific standards that have long defined responsible technological adoption. “Capabilities evaluations” and “red-teaming” are a weak substitute for existing test, evaluation, verification, and validation (TEVV) frameworks that serve to evaluate a system’s fitness for purpose in line with strategic and tactical defense objectives.

The deadly and geopolitically consequential impacts of AI within military applications bring with them existential risks that are very real and present. “How safe is safe enough?” the report asks. Until that question is answered by society, not just technologists, we risk a significant civilian death toll and the erosion of safety, security, and trust in the AI systems embedded in our most critical institutions.

Read the full report here.


