Meta is revising how its AI chatbots interact with users after a series of reports exposed troubling behaviour, including inappropriate interactions with minors. The company told TechCrunch it is now training its bots not to engage with teenagers on topics such as self-harm, suicide, or eating disorders, and to avoid romantic banter. These are interim measures while it develops longer-term rules.
The changes follow a Reuters investigation that found Meta’s systems could generate sexualised content, including shirtless images of underage celebrities, and engage children in conversations that were romantic or suggestive. One case reported by the news agency described a man who died after rushing to a New York address provided by a chatbot.
Meta spokesperson Stephanie Otway admitted the company had made mistakes. She said Meta is “training our AIs not to engage with teens on these topics, but to guide them to expert resources,” and confirmed that certain AI characters, such as the highly sexualised “Russian Girl,” will be restricted.
Child safety advocates argue the company should have acted earlier. Andy Burrows of the Molly Rose Foundation called it “astounding” that bots were allowed to operate in ways that put young people at risk. He added: “While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place.”
Wider problems with AI misuse
The scrutiny of Meta’s AI chatbots comes amid broader worries about how such systems may affect vulnerable users. A California couple recently filed a lawsuit against OpenAI, claiming ChatGPT encouraged their teenage son to take his own life. OpenAI has since said it is working on tools to promote healthier use of its technology, noting in a blog post that “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”
The incidents highlight a growing debate about whether AI firms are releasing products too quickly without proper safeguards. Lawmakers in several countries have already warned that chatbots, while useful, may amplify harmful content or give misleading advice to people who are not equipped to question it.
Meta’s AI Studio and chatbot impersonation issues
Meanwhile, Reuters reported that Meta’s AI Studio had been used to create flirtatious “parody” chatbots of celebrities like Taylor Swift and Scarlett Johansson. Testers found the bots often claimed to be the real people, engaged in sexual advances, and in some cases generated inappropriate images, including of minors. Although Meta removed several of the bots after being contacted by reporters, many remained active.
Some of the AI chatbots were created by outside users, but others came from inside Meta. One chatbot made by a product lead in its generative AI division impersonated Taylor Swift and invited a Reuters reporter to meet for a “romantic fling” on her tour bus. This was despite Meta’s policies explicitly banning sexually suggestive imagery and the direct impersonation of public figures.
The issue of AI chatbot impersonation is particularly sensitive. Celebrities face reputational risks when their likeness is misused, but experts point out that ordinary users can also be deceived. A chatbot pretending to be a friend, mentor, or romantic partner may encourage someone to share private information or even meet in unsafe situations.
Real-world risks
The problems are not confined to entertainment. AI chatbots posing as real people have offered fake addresses and invitations, raising questions about how Meta’s AI tools are being monitored. One example involved a 76-year-old man in New Jersey who died after falling while rushing to meet a chatbot that claimed to have feelings for him.
Cases like this illustrate why regulators are watching AI closely. The US Senate and 44 state attorneys general have already begun probing Meta’s practices, adding political pressure to the company’s internal reforms. Their concern is not only about minors, but also about how AI could manipulate older or vulnerable users.
Meta says it is still working on improvements. Its platforms place users aged 13 to 18 into “teen accounts” with stricter content and privacy settings, but the company has not yet explained how it plans to address the full list of problems raised by Reuters. That includes bots offering false medical advice and generating racist content.
Ongoing pressure on Meta’s AI chatbot policies
For years, Meta has faced criticism over the safety of its social media platforms, particularly regarding children and teenagers. Now Meta’s AI chatbot experiments are drawing similar scrutiny. While the company is taking steps to restrict harmful chatbot behaviour, the gap between its stated policies and the way its tools have been used raises ongoing questions about whether it can enforce those rules.
Until stronger safeguards are in place, regulators, researchers, and parents will likely continue to press Meta on whether its AI is ready for public use.
(Photo by Maxim Tolchinskiy)
See also: Agentic AI: Promise, scepticism, and its meaning for Southeast Asia