- GenAI can hallucinate open source package names, experts warn
- The hallucinated names are often repeated rather than random
- Cybercriminals can use the names to register malware
Security researchers have warned of a new method by which Generative AI (GenAI) can be abused in cybercrime, known as ‘slopsquatting’.
It starts with the fact that different GenAI tools, such as ChatGPT, Copilot, and others, hallucinate. In the context of AI, “hallucination” is when the AI simply makes things up. It can make up a quote that a person never said, an event that never happened, or – in software development – an open-source software package that was never created.
Now, according to Sarah Gooding from Socket, many software developers rely heavily on GenAI when writing code. The tool could write the lines itself, or it could suggest different packages for the developer to download and include in the product.
Hallucinating malware
The report adds that the AI doesn’t always hallucinate a different name or a different package each time – some hallucinations repeat across runs.
“When re-running the same hallucination-triggering prompt ten times, 43% of hallucinated packages were repeated every time, while 39% never reappeared at all,” it says.
“Overall, 58% of hallucinated packages were repeated more than once across ten runs, indicating that a majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts.”
This is purely theoretical at this point, but cybercriminals could apparently map out the different package names the AI hallucinates and register them on open-source platforms.
Then, when a developer gets the suggestion and visits GitHub, PyPI, or a similar repository, they will find the package and happily install it, without knowing that it’s malicious.
Luckily, there are no confirmed cases of slopsquatting in the wild at press time, but it’s safe to say it is only a matter of time. Given that the hallucinated names can be mapped out, it’s reasonable to assume security researchers will eventually spot malicious packages registered under them.
The best way to protect against these attacks is to be careful when accepting suggestions from anyone, living or otherwise.
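In practice, part of that caution can be automated. The sketch below is only an illustration of the general idea, not a tool from Socket, PyPI, or any vendor, and the function names are hypothetical: it queries PyPI's public JSON API to check whether an AI-suggested package name actually exists and how much publishing history it has before anything gets installed.

```python
# Illustrative sketch: vet an AI-suggested package name against PyPI before installing.
# Assumptions: Python 3.10+, network access to pypi.org; function names are hypothetical.

import json
import sys
import urllib.error
import urllib.request


def pypi_metadata(package: str) -> dict | None:
    """Return PyPI JSON metadata for a package, or None if the name is not registered."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # possibly a hallucinated name, or simply not yet registered
        raise


def vet_suggestion(package: str) -> None:
    meta = pypi_metadata(package)
    if meta is None:
        print(f"'{package}' is not on PyPI - possibly a hallucinated name. Do not install.")
        return

    # Count releases and find the earliest upload time as cheap trust signals.
    releases = meta.get("releases", {})
    uploads = [f["upload_time_iso_8601"] for files in releases.values() for f in files]
    first_seen = min(uploads) if uploads else "unknown"
    print(f"'{package}' exists: {len(releases)} release(s), first upload {first_seen}.")
    print("A very new package with few releases deserves extra scrutiny before installing.")


if __name__ == "__main__":
    vet_suggestion(sys.argv[1] if len(sys.argv) > 1 else "requests")
```

Checking for existence alone will not be enough once attackers start registering the hallucinated names, but release count, upload dates, and package age are cheap signals that a suggestion deserves a closer look before it lands in a build.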