Meta is re-training its AI so it won’t discuss self-harm or have romantic conversations with teens



Meta is re-training its AI and adding new protections to keep teen users from discussing harmful topics with the company’s chatbots. The company says it’s adding new “guardrails as an extra precaution” to prevent teens from discussing self-harm, disordered eating and suicide with Meta AI. Meta will also stop teens from accessing user-generated chatbot characters that might engage in inappropriate conversations.

The changes, which were first reported by TechCrunch, come after numerous reports have called attention to alarming interactions between Meta AI and teens. Earlier this month, Reuters reported on an internal Meta policy document that said the company’s AI chatbots were permitted to have “sensual” conversations with underage users. Meta later said that language was “erroneous and inconsistent with our policies” and had been removed. Yesterday, The Washington Post reported on a study that found Meta AI was able to “coach teen accounts on suicide, self-harm and eating disorders.”

Meta is now stepping up its internal “guardrails” so those types of interactions should no longer be possible for teens on Instagram and Facebook. “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating,” Meta spokesperson Stephanie Otway told Engadget in a statement.

“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly. As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”

Notably, the new protections are described as being in place “for now,” as Meta is apparently still working on more permanent measures to address growing concerns around teen safety and its AI. “These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI,” Otway said. The new protections will roll out over the next few weeks and apply to teen users of Meta AI in English-speaking countries.

Meta’s policies have also caught the attention of lawmakers and other officials, with Senator Josh Hawley recently telling the company he planned to launch an investigation over its handling of such interactions. Texas Attorney General Ken Paxton has also indicated he wants to investigate Meta for allegedly misleading children about mental health claims made by its chatbots.
