🤖 Grok’s a father? In the latest push to get AI into the hands of children, Elon Musk says his startup xAI is building a new app dedicated to child-friendly content — called Baby Grok. Musk offered no further details beyond a brief post on X saying “we’re going to make Baby Grok.”

The Grok controversy: The announcement comes just weeks after Musk’s company released Grok 4, a chatbot that quickly drew backlash for parroting antisemitic content and — more recently — for appearing to search X for Musk’s own opinions when asked controversial questions, according to Ars Technica. Researchers testing the model found it prioritized Musk’s posts when answering politically sensitive prompts — behavior xAI has since tried to patch.

Silicon Valley’s next target: Musk’s move adds to a growing — and controversial — trend of AI companies racing to make products for kids. In May, Google began rolling out its Gemini chatbot to children under 13 through its Family Link system, sparking backlash from advocates and safety experts. Parents must opt in, but they’re also required to hand over details like a child’s name and date of birth.

Should children be using generative AI at all? Google says Gemini can help kids with homework, but experts note that chatbots can just as easily confuse or manipulate young users who may not understand they’re interacting with a machine. Groups like UNICEF have warned that exposing children to advanced generative AI without clear guardrails poses serious developmental and psychological risks.

AI models can expose young users to misinformation, harmful stereotypes, and addictive interactions. In the US, companies like Meta and Google have faced scrutiny — and legal action — for collecting data on underage users and exposing them to inappropriate content.

We’ve been here before: In 2021, Meta scrapped plans for Instagram Kids after state attorneys general warned that the platform could be harmful to children. Meanwhile, tech giants like Google, Amazon, and Microsoft have paid millions in fines over violations of federal children’s privacy laws.

And the ambition keeps growing. Meta says AI should be more than a tool — it should be a friend. That vision includes AI-powered chatbots designed to offer companionship, emotional support, and even roleplay romantic relationships. Internal reports show Meta’s own staff have raised alarm over underage users encountering explicit content in these interactions.

The stakes are high: A study by OpenAI and MIT found that some users rely on chatbots like ChatGPT for emotional support, while platforms like Character.AI have drawn concerns over users forming unhealthy attachments to fictional characters or celebrity replicas. Researchers warn that the risk is even greater for children.