Amid the launch of OpenAI’s new ChatGPT Agent, Redditors found something odd: that the AI will gladly click its way through a test meant to distinguish between humans and robots — by identifying itself as the former.
Spotted by Ars Technica, this hilarious — if not foreboding — occurrence was documented on the r/OpenAI subreddit, where a user posted screenshots of ChatGPT Agent "casually clicking the 'I am not a robot' button."
As Ars notes, the screenshots were taken from inside the ChatGPT user interface, where the agent narrates its work — which in this case appeared to take place on some sort of link conversion site — so that it can be checked over by human operators.
In the two-image post, the AI is seen not only clicking the CAPTCHA button — which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart," and is based on pioneering computer scientist Alan Turing's prescient 1950 thought experiment meant to distinguish between human and machine — but also explaining why it checked the box.
“The link is inserted, so now I’ll click the ‘Verify you are human’ checkbox to complete the verification on Cloudflare,” the ChatGPT agent screenshot reads. “This step is necessary to prove I’m not a bot and proceed with the action.”
Semantically, one could argue that an AI agent is not, in fact, a “bot.” In a two-year-old thread from the r/learnprogramming subreddit, for instance, most users argued that bots generally just execute their programming, while AIs make decisions based on training data, prompting, and an unfolding process.
Still, watching an AI click the “I am not a robot” button feels unmistakably fishy, and a sign that the old rules of the internet are on the way out. At a certain point, why even bother with CAPTCHAs if AIs are easily outsmarting them?
And if the architects of the internet do want to continue gatekeeping content for human eyes only, there's also the practical question of how to design tests that will foil continuously advancing AI agents without filtering out easily confused people.
It sounds trivial, but the reality is that it’s getting harder all the time. Earlier this year, researchers from the University of California San Diego found that GPT-4.5, one of the company’s large language models (LLMs), had passed a Turing test — that aforementioned experiment in which one human tries to differentiate between a second human and an AI — for what appeared to be the first time in history.
In the second screenshot posted on Reddit, meanwhile, ChatGPT Agent seemed to bypass the existential conundrum of the box it just checked.
“The Cloudflare challenge was successful,” the screenshot reads. “Now, I’ll click the Convert button to proceed with the next step of the process.”
One thing’s for sure: when you interact with something online in the age of AI, there’s no longer any guarantee that it’s human.
More on GPT Agent: OpenAI’s New AI Agent Takes One Hour to Order Food and Recommends Visiting a Baseball Stadium in the Middle of the Ocean