Systemic Blowback: AI’s Foreseeable Fallout


One of the most sobering insights from Contributing Editor Robert N. Charette’s feature story in this issue is that the 20-year rollout of electronic health records (EHRs) in the United States happened with an intentional disregard for interoperability. As a result, thousands of health care providers are “burdened with costly, poorly designed, and insecure EHR systems that have exacerbated clinician burnout, led to hundreds of millions of records lost in data breaches, and created new sources of medical errors,” Charette writes.

The U.S. government made this myopic decision in order to speed up EHR adoption, ignoring the longer-term costs. The operating mantra, says Charette, was that EHR systems “needed to become operational before they could become interoperable.”

You could call what happened next “unintended consequences,” but that would absolve decision-makers in government and industry of making choices they knew could compromise user experience, security, and patient outcomes. The results were entirely foreseeable. A more appropriate term might be “systemic blowback”—large-scale negative outcomes that result from decisions to accelerate the adoption of new technology without consideration for the broader potential impacts.

Once you see systemic blowback in one technological context, you start to see it in others. Case in point: the global deployment of artificial intelligence.

AI’s Impact on White-Collar Jobs

In May, Dario Amodei, CEO of Anthropic, maker of Claude AI, told Axios that AI could wipe out half of all entry-level white-collar jobs—and spike unemployment to 10 to 20 percent in the next one to five years. (U.S. unemployment was about 4 percent in June.) “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” Amodei said. “I don’t think this is on people’s radar.”

But Amodei’s acknowledgment of the potential harms of mass AI adoption comes off as just virtue signaling. Big AI, Amodei surmises, will continue to develop this technology so we can cure cancer, grow the economy 10 percent annually, and even balance the federal budget. And by the way, up to one in five people will soon be unemployed. That last part—the harm—is someone else’s problem to solve.

Computer programmers are feeling the harm right now. According to The Washington Post, more than a quarter of all coding jobs have vanished in the last two years, with much of that loss attributable to AI usage. As Spectrum reported last month, LLMs are improving at an exponential rate, which doesn’t augur well for the rest of the human workforce.
That includes people working in media. Ever since Google emerged as the home page of the Web in the early 2000s, media outlets have operated under the assumption that Google would reliably crawl their sites and send audiences their way.

Google blew up that deal when it introduced AI answers to its entire user base earlier this year. Since then, Spectrum has had about double the impressions—the times Spectrum content shows up on the search results page or, increasingly, in an AI answer—and about 40 percent fewer click-throughs from people coming to our website to read the cited article. As Web traffic dies, so do the business models predicated on that traffic. Oh well, says Big AI, someone else’s problem.

But killing off the current information ecosystem means that AIs will increasingly ingest new content written by other AIs, because the humans who produced the content are gone or will be soon. Garbage in, garbage out. This time next year, don’t be surprised when your shiny, new AI agent gives you a morning briefing that’s just off. Then Big AI’s problem will be your problem. Sooner or later you too will feel the systemic blowback.
