AI agents were once theoretical, but now they are a tangible force reshaping the modern threat landscape. Also known as Computer-Using Agents (CUAs), these advanced AI bots can use applications and browse the internet to complete complex, often time-consuming tasks with minimal or no human oversight. Their rapid evolution is unlocking new efficiencies across a variety of sectors with automation and analysis, enabling more informed decision-making.
But this leap forward comes with a caveat. As they grow more capable, AI agents introduce a new class of cybersecurity threats. Malicious actors can hijack these tools to orchestrate sophisticated cyberattacks, exploiting predictable patterns of human behavior to infiltrate systems and exfiltrate sensitive data.
Lead Cybersecurity Researcher, CultureAI.
From theory to reality
To move beyond theory and speculation, our team undertook a series of controlled experiments to assess how agentic AI could be weaponized. We found that these agents can automate a wide range of malicious tasks on behalf of threat actors when instructed correctly.
This includes, but is not limited to, credential stuffing and reconnaissance, which previously required significant human effort. To make matters worse, they can even perform outright cyberattacks by guessing passwords and sending out phishing emails en masse.
This marks a watershed moment in cybersecurity’s fight against AI-powered threats. The automation of attacks significantly lowers the barrier to entry for threat actors, enabling even low-skilled individuals to launch high-impact campaigns. This has the potential to rapidly escalate the scale at which phishing attacks can be carried out.
The growing capabilities of AI agents
The largest AI players are redefining what agents can do. Platforms like OpenAI’s Operator, alongside various tools developed by Google, Anthropic and Meta, all have their own strengths and limitations, but share one critical feature: the ability to carry out real-world actions based on very simple text prompts.
This functionality is a double-edged sword. In the hands of responsible users, it can drive innovation and productivity. But in the wrong hands, it becomes a powerful weapon, one that can turn a novice attacker into a formidable threat.
The good news is that widespread abuse of these tools is not yet common. However, that window is closing fast. The simplicity and accessibility of agentic AI make it an ideal tool for amplifying social engineering attacks.
Automating reconnaissance at scale
To illustrate the real-world implications, we investigated whether agentic AI could be utilized to automate the collection of information for targeted attacks. Using OpenAI’s Operator, which features a sandboxed browser and possesses uniquely autonomous behavior, we issued a simple prompt: identify new employees at a specific company.
Within minutes, the agent accessed LinkedIn, analyzed recent company posts and profile updates, and compiled a list of new joiners from the past 90 days. It extracted names, roles, and start dates: all the information needed to craft highly targeted phishing campaigns, and it did so in the blink of an eye.
Some might be tempted to dismiss this as a simple information-gathering exercise. But this experiment demonstrates that seemingly harmless human behaviors, like posting job updates on social media, can inadvertently expose organizations to significant cyber risk. What once took hours or days can now be accomplished in minutes, at scale.
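The same 90-day signal the agent exploited can also inform a defensive control. The sketch below (the data, field names, and function are illustrative assumptions, not part of the research) shows how a security team might flag recent joiners, the population most exposed to this style of targeting, for stricter phishing protections:

```python
from datetime import date, timedelta

# Hypothetical employee records; in practice these would come from an HR system.
employees = [
    {"name": "A. Rivera", "start_date": date(2025, 5, 12)},
    {"name": "B. Chen", "start_date": date(2024, 1, 3)},
]

def recent_joiners(records, today, window_days=90):
    """Return employees who started within the last `window_days` days,
    the same population an AI agent can enumerate from public posts."""
    cutoff = today - timedelta(days=window_days)
    return [r["name"] for r in records if r["start_date"] >= cutoff]

# Flagged joiners could then be enrolled in stricter mail filtering
# or additional MFA checks during their first months.
print(recent_joiners(employees, today=date(2025, 6, 1)))
```

The point of the sketch is that defenders can enumerate the same exposed population faster than an attacker's agent can, and act on it first.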
Exploiting identity through credential stuffing
Another alarming capability of agentic AI is its potential to facilitate identity-based attacks. Credential stuffing, a method where attackers use previously compromised username and password combinations to gain unauthorized access, is a prime example.
To test this attack vector, we instructed Operator to attempt access to login flows on several popular SaaS platforms, equipping it with a target email address and a publicly available list of breached passwords. Based on this limited information, it was able to get into one of the accounts. This underscores how agentic AI can be used to automate credential abuse, bypass traditional defenses and exploit the weakest link in the security chain: human error.
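Credential stuffing has a recognizable shape in authentication logs: one source spraying many distinct usernames with an almost total failure rate, unlike a legitimate user mistyping their own password. A minimal detection sketch (thresholds and data are illustrative assumptions, not a production rule set):

```python
from collections import defaultdict

def flag_stuffing(events, min_users=5, min_fail_rate=0.9):
    """Flag source IPs whose login attempts span many distinct usernames
    and almost always fail: the classic credential-stuffing signature."""
    by_ip = defaultdict(lambda: {"users": set(), "total": 0, "fails": 0})
    for ip, user, success in events:
        stats = by_ip[ip]
        stats["users"].add(user)
        stats["total"] += 1
        stats["fails"] += 0 if success else 1
    return [
        ip for ip, s in by_ip.items()
        if len(s["users"]) >= min_users and s["fails"] / s["total"] >= min_fail_rate
    ]

# (source_ip, username, login_succeeded) tuples; illustrative data only.
events = (
    [("203.0.113.7", f"user{i}@example.com", False) for i in range(20)]
    + [("198.51.100.2", "alice@example.com", True)]
)
print(flag_stuffing(events))
```

Heuristics like this catch naive automation; an agent that rotates IPs and paces its attempts will evade them, which is why phishing-resistant authentication matters more than detection alone.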
Injecting heightened urgency into human risk management
Our research confirms that agentic AI is already capable of executing a broad spectrum of malicious activities, from phishing and malware delivery to exposing vulnerabilities. While current capabilities are still in their early stages, the potential is there for automated attacks at scale in the not-so-distant future.
This calls for a fundamental shift in how organizations approach cybersecurity. Historically, the focus has been on protecting systems, not people. However, traditional methods like annual training and awareness campaigns only serve to place the burden on employees. This is an outdated approach, and it papers over the real root causes of human error.
Human-centric cyber risk management needs to be proactive, and it needs to operate in real time. This includes two main steps:
- User-focused controls: Implementing strong authentication, behavioral monitoring, and phishing-resistant technologies shifts the focus to identifying common risky behaviors
- Threat mapping: Visualizing and prioritizing human-centric risks in the same way software risks are tracked in the MITRE ATT&CK framework, for example, can inform more targeted interventions tailored to specific risky user behaviors
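The second step can start as a simple structured register of observed behaviors, triaged the way software vulnerabilities are. A minimal sketch of the idea (the behaviors, scores, and 1-to-5 scale are illustrative assumptions, not an established taxonomy):

```python
# Each entry: (behavior, likelihood 1-5, impact 1-5). Scores are illustrative.
behaviors = [
    ("Announces new role publicly before security onboarding", 4, 3),
    ("Reuses a breached password on a SaaS login", 3, 5),
    ("Clicks links in unsolicited recruiter messages", 4, 4),
]

def prioritize(register):
    """Rank human-centric risks by likelihood x impact, mirroring how
    software vulnerabilities are triaged, so interventions target the
    behaviors that open the widest door to attackers."""
    return sorted(register, key=lambda entry: entry[1] * entry[2], reverse=True)

for behavior, likelihood, impact in prioritize(behaviors):
    print(f"{likelihood * impact:>2}  {behavior}")
```

Even a crude score like this turns vague awareness training into targeted interventions: the top-ranked behavior gets the first control.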
By understanding the human behaviors that create openings for threat actors, businesses can deploy smarter, more effective defenses. This shift from reactive to proactive security is well established for software defense, so there is no good reason human risk should be treated any differently.
Adapt before it’s too late
Agentic AI is not just a technological advancement; it is a vehicle for cyberattacks at scale. As these tools become more powerful and accessible, the cybersecurity community must shift its mindset. The future of cyber defense lies not just in securing systems, but in understanding and protecting the people who use them.
The clock is ticking, and the attackers are already adapting. So should you.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro