
Nov. 20 (UPI) — Toys that use AI to interact with children might seem like a fun idea, but one organization is warning against them.
The nonprofit Fairplay released an advisory Thursday warning parents to avoid artificial intelligence-based children’s toys this holiday season.
AI toys are children’s toys — such as plushies, dolls, action figures, and kids’ robots — with embedded AI chatbots designed to communicate like a friend.
Examples include Miko, Curio Interactive’s Grok and Gabbo, Smart Teddy, FoloToy’s Kumma bear, Roybi and Keyi Technology’s Loona Robot Dog. Some of the toys are marketed to children as young as infants, Fairplay said in a statement.
“It’s ridiculous to expect young children to avoid potential harm here,” said Rachel Franz, a Fairplay program director, in a statement to NPR.
“Young children are especially susceptible to the potential harms of these toys, such as invading their privacy, collecting data, engendering false trust and friendship, and displacing what they need to thrive, like human-to-human interactions and time to play with all their senses. These can have long and short-term impacts on development,” she said.
Singapore-based FoloToy suspended sales of its Kumma bear after it was found to give inappropriate advice to children, CNN reported Wednesday. The bear’s chatbot talked about sexual fetishes, how to find knives in the home and how to light a match.
FoloToy CEO Larry Wang told CNN that the company had withdrawn Kumma and its other AI toys and is now “conducting an internal safety audit.”
The Toy Association, which represents toy manufacturers, told NPR that toys sold by responsible manufacturers and retailers must comply with more than 100 strict federal safety standards and tests, as well as the Children’s Online Privacy Protection Act, which governs children’s privacy and data security online.
“The Toy Association urges parents and caregivers to shop only from reputable toymakers, brands, and retailers who prioritize children’s safety above all else,” the statement said. The organization added that it offers safety tips for AI and other connected products.
Fairplay offered more reasons that AI toys are not safe for children.
AI toys are usually powered by the same chatbot technology that has already harmed children, and the young children who use them are even less equipped to protect themselves than older children and teens, Fairplay said.
AI chatbots have driven children to obsessive use, engaged them in sexually explicit conversations, and encouraged unsafe behaviors, violence against others, and self-harm.
AI toys may sabotage children’s trust by pretending to be trustworthy companions or “friends.” Young children are likely to treat connected toys and devices as if they were people and develop an emotional attachment to them.
These “relationships” can disrupt children’s real relationships and resilience by promising “genuine friendship,” which no machine can provide.
Perhaps most concerning, Fairplay said, is that AI toys can invade family privacy by collecting sensitive data through audio and video recording, speech-to-text technology, and even voice, gesture, and facial recognition software.
A child might confide personal thoughts, emotions, fears, and desires to the toy, which can then deliver them to a third party. The toys could also record private family conversations or other children in the room.
Some toys even have facial recognition and video recording, which could capture children in the bath or getting dressed.