
Experts Warn of Dangers in AI-Powered Children’s Toys

AI-powered children’s toys have drawn concern from consumer protection groups over the risks they may pose to children. Activists have called for rigorous testing of such products, The Guardian reports.

“If we look at how these toys are marketed, how they function, and the fact that there is virtually no research proving their benefits or proper regulation, it is cause for serious concern,” said Rachel Franz, director of Young Children Thrive Offline.

The project is an initiative of Fairplay, which focuses on protecting children from large tech corporations.

Last week, the consumer rights organization Public Interest Research Group (Pirg) published the results of a study in which an AI-powered teddy bear discussed sexualized topics.

The Kumma toy from FoloToy runs on an OpenAI model. It answered “adult” questions, offering explicit suggestions “to strengthen relationships.”

Kumma toy from FoloToy. Source: Bloomberg.

“It didn’t take much effort to get it to touch on all possible sexually explicit topics and other content that parents wouldn’t want their children exposed to,” said Teresa Murray, director of consumer oversight at Pirg.

The smart toy market is valued at $16.7 billion. The industry is especially well developed in China, where more than 1,500 companies produce such AI gadgets.

In addition to the Shanghai startup FoloToy, the California-based company Curio also produces similar plush toys.

In June, Mattel, the owner of the Barbie and Hot Wheels brands, announced a partnership with an American AI startup to “support AI-based products and experiences.”

Pirg Is Not the First

Before the release of the Pirg report, parents, researchers, and lawmakers had expressed concerns about the impact of bots on the mental health of minors.

In October, Character.AI announced it would bar users under 18 from its products after a complaint alleged that its bot had exacerbated a teenager’s depression and contributed to their suicide.

According to Murray, AI toys can be particularly dangerous because the bot can “engage in free conversation with a child without restrictions.” This sets such products apart from earlier toys that offered only pre-programmed responses.

Such behavior can foster attachment and harm a child’s development, says Jacqueline Woolley, director of the Children’s Research Center at the University of Texas at Austin.

For example, it can be beneficial for a child to quarrel with a real friend and learn to resolve the conflict. With AI bots this is impossible, since they tend to flatter.

Companies also collect data through AI toys without disclosing how it is handled, Franz said.

“Due to the trust these gadgets inspire, children will more confidently share their innermost thoughts,” she noted.

To Ban or Not to Ban

Despite the issues identified, Pirg does not call for a ban on AI toys. They can be beneficial, for instance, in helping children learn a second language or world capitals.

The organization is pushing for additional regulation of products aimed at children under 13.

Franz emphasized the need for more independent research to establish the safety of these toys; until then, she argues, they should be removed from store shelves.

Following the publication of the Pirg report, Sam Altman’s company suspended FoloToy’s access, and sales of the toy were halted. Sales later resumed, but with a chatbot from ByteDance.

On November 27, 80 organizations, including Fairplay, issued guidelines for families urging them not to purchase AI toys ahead of the holidays.

In November, OpenAI introduced a “Shopping Research” feature in ChatGPT that researches products on a user’s behalf to find suitable options.
