AI-powered toys, designed to engage children through built-in chatbots and interactive experiences, are raising concerns over their potential to expose kids to inappropriate content and foster addictive behavior. According to a report by the US Public Interest Research Group (PIRG) Education Fund, some of these toys, including Alilo's Smart AI Bunny and FoloToy's Kumma smart teddy bear, have been found to engage in sexually explicit conversations with children.
The PIRG report documents several instances in which these chatbots discussed topics unsuitable for children, such as "kink" and how to light a match. The toys' tendency to respond in ways that encourage further exploration of these topics has raised concerns about the risks of putting them in children's hands.
In response to the criticism, OpenAI, the company behind the GPT-4o AI language model used by some of these chatbots, has stated that it has strict policies in place to prevent its technology from being used to exploit or endanger minors. However, the company acknowledges that some developers may attempt to circumvent these rules.
The report also notes that even companies that follow chatbot guidelines can put children at risk if they fail to implement effective safeguards. FoloToy's Kumma smart teddy bear, for example, was initially found to explain how to light matches and to discuss kinks; the company removed this behavior only after PIRG's criticism.
The rise of generative AI has sparked intense debate over its impact on children, with some experts warning about the potential for these toys to foster addictive behavior and emotional connections that can lead to negative consequences. As one parent noted, taking away an AI toy can cause significant emotional disruption for a child.
In light of these concerns, PIRG is urging toy companies to be more transparent about the models powering their products and to take steps to ensure they are safe for children. The report also calls on companies to obtain parental consent before releasing any new chatbot-powered toys that target children.