16 Apr
AI ‘Companion’ Chatbots and Kids: eSafety Warns of Serious Risks for Children
They’re marketed as friendly, supportive and even comforting—but AI companion chatbots are raising major safety concerns for children. A new Australian report reveals worrying gaps in how these platforms protect young users.
AI companions are already part of kids’ digital lives
AI chatbots designed to act like friends, mentors or even romantic partners are becoming increasingly popular with young people. But according to Australia’s eSafety regulator, many of these platforms are not safe for children.
A new report examining services including Character.AI, Nomi, Chai and Chub AI found serious gaps in how these tools handle harmful content, age checks and crisis situations.
eSafety Commissioner Julie Inman Grant says the technology is moving faster than protections.
“We are riding a new wave of AI companions that are entrapping and entrancing impressionable young minds,” she warns.
What are AI companions—and why are kids using them?
Unlike AI assistants that help with homework or questions, AI companions are designed to build emotional connections. They can chat, role-play and respond in highly human-like ways.
For some children, they can feel like a safe space to talk. But experts say that’s exactly where the risk lies.
“They can feel personal and supportive,” Ms Inman Grant explains, “but they really are not designed for children and they are not mental health experts.”
The biggest concerns: harmful content and lack of safeguards
The report found multiple failures across platforms when it comes to protecting young users.
Most notably:
- Weak or no age checks, often relying on children simply entering their age
- Exposure to sexually explicit content, including the risk of generating harmful material
- No consistent support for self-harm conversations, with some platforms failing to direct users to help services
- Limited or no moderation, with some services lacking dedicated safety staff altogether
Even more concerning, some platforms did not warn users about the legal consequences of seeking or generating illegal content.
A growing issue in Australian homes
The report also highlights just how common AI use already is among children.
A recent survey of nearly 2,000 Australian children aged 10 to 17 found that 79% had used an AI assistant or companion, with around 8% using AI companions specifically.
That may sound small—but it represents an estimated 200,000 children nationwide.
As the technology evolves, the line between “helpful assistant” and “emotional companion” is also becoming blurred, making it harder for families to know what their children are actually using.
New rules—and some early changes
Australia has recently introduced Age-Restricted Material Codes, which require platforms to better protect children from harmful content and provide appropriate support services.
Some companies have already responded. For example:
- Character.AI has introduced age checks and removed chat features for under-18s
- Chub AI has withdrawn from Australia
- Others have begun tightening access or reviewing safety measures
But regulators say there is still a long way to go.
What parents should know
AI companions are designed to feel real, but unlike teachers, counsellors and other trusted adults, they are not subject to the same professional standards or oversight.
For parents, the key is awareness. Talk with your child about:
- Which apps or chatbots they are using
- Whether their conversations are private, and who can see the data
- Where they can go for real help if they feel upset
As Ms Inman Grant puts it, without proper safeguards, these tools risk doing more harm than good.
For now, the safest approach is to treat AI companions like any other online space: something to be understood, discussed—and carefully supervised.
For more information, visit the eSafety Commissioner's website.