I was sick last week, so I did not have time to write about the Discover Tab in Meta’s AI app, which, as Katie Notopoulos of Business Insider has pointed out, is the “saddest place on the internet.” Many very good articles have already been written about it, and yet, I cannot allow its existence to go unremarked upon in the pages of 404 Media.
If you somehow missed this while millions of people were protesting in the streets, state politicians were being assassinated, war was breaking out between Israel and Iran, the military was deployed to the streets of Los Angeles, and a Coinbase-sponsored military parade rolled past dozens of passersby in Washington, D.C., here is what the “Discover” tab is: The Meta AI app, which is the company’s competitor to the ChatGPT app, is posting users’ conversations on a public “Discover” page where anyone can see the things that users are asking Meta’s chatbot to make for them.

This includes various innocuous image and video generations that have become completely inescapable on all of Meta’s platforms (things like “egg with one eye made of black and gold,” “adorable Maltese dog becomes a heroic lifeguard,” “one second for God to step into your mind”), but it also includes entire chatbot conversations where users are seemingly unknowingly leaking a mix of embarrassing, personal, and sensitive details about their lives onto a public platform owned by Mark Zuckerberg. In almost all cases, I was able to trivially tie these chats to actual, real people because the app uses your Instagram or Facebook account as your login.

Within a few minutes last week, I saved a series of these chats into a Slack channel I created and called “insanemetaAI.” These included:
- entire conversations about “my current medical condition,” which I could tie back to a real human being with one click
- details about someone’s life insurance plan
- “At a point in time with cerebral palsy, do you start to lose the use of your legs cause that’s what it’s feeling like so that’s what I’m worried about”
- details about a situationship gone wrong after a woman did not like a gift
- an older disabled man wondering whether he could find and “afford” a young wife in Medellin, Colombia on his salary (“I’m at the stage in my life where I want to find a young woman to care for me and cook for me. I just want to relax. I’m disabled and need a wheelchair, I am severely overweight and suffer from fibromyalgia and asthma. I’m 5’9 280lb but I think a good young woman who keeps me company could help me lose the weight.”)
- “What counties [sic] do younger women like older white men? I need details. I am 66 and single. I’m from Iowa and am open to moving to a new country if I can find a younger woman.”
- “My boyfriend tells me to not be so sensitive, does that affect him being a feminist?”
Rachel Tobac, CEO of Social Proof Security, compiled a series of chats she saw on the platform and messaged them to me. These are even crazier and include people asking “What cream or ointment can be used to soothe a bad scarring reaction on scrotum sack caused by shaving razor,” “create a letter pleading judge bowser to not sentence me to death over the murder of two people” (possibly a joke?), someone asking if their sister, a vice president at a company that “has not paid its corporate taxes in 12 years,” could be liable for that, audio of a person talking about how they are homeless, someone asking for help with their cancer diagnosis, someone discussing being newly sexually interested in trans people, etc.
Tobac gave me a list of the types of things she’s seen people posting in the Discover feed, including people’s exact medical issues, discussions of crimes they had committed, their home addresses, talking to the bot about extramarital affairs, etc.


“When a tool doesn’t work the way a person expects, there can be massive personal security consequences,” Tobac told me.
“Meta AI should pause the public Discover feed,” she added. “Their users clearly don’t understand that their AI chat bot prompts about their murder, cancer diagnosis, personal health issues, etc have been made public. [Meta should have] ensured all AI chat bot prompts are private by default, with no option to accidentally share to a social media feed. Don’t wait for users to accidentally post their secrets publicly. Notice that humans interact with AI chatbots with an expectation of privacy, and meet them where they are at. Alert users who have posted their prompts publicly and that their prompts have been removed for them from the feed to protect their privacy.”


Since several journalists wrote about this issue, Meta has made it clearer to users when interactions with its bot will be shared to the Discover tab. Notopoulos reported Monday that Meta seemed to no longer be sharing text chats to the Discover tab. When I looked for prompts Monday afternoon, the vast majority were for images. But the text prompts were back Tuesday morning, including a full audio conversation of a woman asking the bot what the statute of limitations is for a woman to press charges for domestic abuse in the state of Indiana, which had taken place two minutes before it was shown to me. I was also shown six straight text prompts of people asking questions about the movie franchise John Wick, a chat about “exploring historical inconsistencies surrounding the Holocaust,” and someone asking for advice on “anesthesia for obstetric procedures.”
I was also, Tuesday morning, fed a lengthy chat in which an identifiable person explained that they are depressed: “just life hitting me all the wrong ways daily.” The person then left a comment on the post: “Was this posted somewhere because I would be horrified? Yikes?”
Several of the chats I saw and mentioned in this article are now private, but most of them are not. I can imagine few things on the internet that would be more invasive than this, but only if I try hard. This is like Google publishing your search history publicly, or randomly taking some of the emails you send and publishing them in a feed to help inspire other people on what types of emails they too could send. It is like Pornhub turning your searches or watch history into a public feed that could be trivially tied to your actual identity. Mistake or not, feature or not (and it’s not clear what this actually is), it is crazy that Meta did this; I still cannot actually believe it.
In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a more impactful, worse actor than Meta, whose platforms have been fully overrun with viral AI slop, AI-powered disinformation, AI scams, AI nudify apps, and AI influencers, and whose impact is outsized because billions of people still use its products as their main entry point to the internet. Meta has shown essentially zero interest in moderating AI slop and spam and, as we have reported many times, literally funds it, sees it as critical to its business model, and believes that in the future we will all have AI friends on its platforms. While reporting on the company, I have found it hard to imagine what rock bottom will be, because Meta keeps innovating bizarre and previously unimaginable ways to destroy confidence in social media, invade people’s privacy, and generally fuck up its platforms and the internet more broadly.
If I twist myself into a pretzel, I can rationalize why Meta launched this feature, and what its idea for doing so is. Presented with an empty text box that says “Ask Meta AI,” people do not know what to do with it, what to type, or what to do with AI more broadly, and so Meta is attempting to model that behavior for people and is willing to sell out its users’ private thoughts to do so. I did not have “Meta will leak people’s sad little chats with robots to the entire internet” on my 2025 bingo card, but clearly I should have.