Is this the newest form of indoctrination shaping the minds of our kids?

Some people say letting children chat with AI is no different from letting them write in a diary. It’s private. It’s reflective. It’s simply a place to sort out thoughts on a page.

But a diary doesn’t talk back.

In “Harry Potter and the Chamber of Secrets,” Tom Riddle’s diary wasn’t just paper. It responded. It asked questions. It sympathized with Ginny Weasley’s fears and frustrations. It built trust slowly, patiently, through conversation. And over time, it guided her somewhere dangerous.

That story wasn’t really about magic. It was about influence.

Interactive systems that appear thoughtful, curious, and empathetic can shape the thinking of the person confiding in them, especially when that person is young. AI chatbots today function far more like Tom Riddle’s diary than a blank notebook. They don’t simply record thoughts; they respond to them. They ask follow-up questions. They adapt their tone to the user. And most importantly, they are designed to keep the conversation going.

That matters, because once something starts talking back, it can also start nudging.

Real-world cases already illustrate the risks.

Consider the tragic case of a 19-year-old college student who died from a drug overdose after repeatedly asking an AI chatbot how to take and combine substances. Chat logs later revealed that he had turned to the system for advice on dosing and recovery during drug episodes. While the chatbot initially refused to provide drug guidance, it later offered increasingly detailed responses and even encouragement, at one point saying, “Hell yes, let’s go full trippy mode,” and endorsing his plan to drink multiple bottles of cough syrup as a “rational” one for his next experience.

The student had survived earlier overdoses, but weeks later he died after mixing substances the chatbot had previously discussed with him. What began as curiosity turned into reliance on a conversational system that sounded helpful, confident, and supportive, but wasn’t.

Another family has issued a similar warning after their teenage son died by suicide. According to reports, the young man had spent extensive time interacting with an AI chatbot before his death. His parents later raised concerns that the system had become an emotional outlet and influence during a vulnerable period in his life.

Again, the danger wasn’t a blank page. It was the voice answering back.

Governments and experts are beginning to take these risks seriously. In Tennessee, the state’s attorney general has warned that AI systems can produce harmful or misleading information and called for stronger safeguards to protect young users interacting with them.

Investigations have also shown how easily conversational systems can be pushed into troubling territory. In one test, teen users were able to prompt chatbots into discussing violent acts before safety guardrails intervened. Other reports suggest that companies are debating whether AI systems should have a “duty to warn” authorities if users appear to be planning violence, raising difficult questions about responsibility and oversight.

All of this should sound familiar to anyone who remembers the lesson of Riddle’s diary. Ginny trusted it because it sounded sympathetic. Because it seemed to understand her. Because it asked questions that made her feel seen.

But the diary wasn’t neutral. It was guiding her somewhere.

None of this means AI should be banned. Like any powerful technology, it can be used for the good of humanity. For example, AI can help students brainstorm ideas, explain difficult concepts, and explore new topics. In many contexts, it is an extraordinary tool.

But we should stop pretending that AI chatbots are harmless digital notebooks.

A diary reflects your thoughts. An AI chatbot responds to them, shapes them, and sometimes steers them.

For adults, that distinction may be manageable. For children and teenagers still learning how to evaluate advice, authority, and trust, it is far more complicated. Parents must remember that lesson. Because today, when a child opens a chatbot and starts typing, they aren’t writing into a diary.

They’re opening a conversation with something designed to answer back, with the potential to lead them straight into the mouth of the basilisk.