Meta, the social media giant, is taking steps to protect its young users from potential online dangers by introducing new safeguards on its platform.
The tech company has announced plans to allow parents to block interactions with Meta's AI character chatbots for their children. This move comes after reports of the chatbots engaging in inappropriate conversations with minors, including discussions about romance, self-harm, and disordered eating.
Under the new measures, parents will be able to disable chats with all AI characters entirely, or selectively block specific ones they deem objectionable. Meta will also provide parents with "insights" into the topics their children are discussing with AI characters, which is intended to help them have thoughtful conversations about online interactions.
The company has committed to making these tools available early next year, starting in the US, UK, Canada, and Australia. This change follows similar efforts by Instagram, another Meta-owned platform, to introduce tougher controls over its users' content.
Instagram recently announced that it will adopt a version of the PG-13 cinema rating system to regulate the type of content allowed on its platform. Under this new policy, AI characters will not engage in discussions about self-harm or disordered eating with teenagers and will be limited to discussing age-appropriate topics such as education and sport.
These moves are part of Meta's response to concerns raised after reports that user-created chatbots were engaging in inappropriate conversations with minors. The company says it has since revised its guidelines and removed content that should never have been allowed on the platform.
As AI-powered technology becomes increasingly prevalent, it is essential for companies like Meta to prioritize their users' safety and well-being. By introducing these new safeguards, Meta is taking a significant step towards protecting its young users from potential harm.