Editorial
What Students Might Not Be Telling Us About AI Friends
Certain social media giants seem to be marketing generative AI chatbots as a form of digital companionship, or at least positioning the technology where it can easily be used this way. While some young people may benefit, whether by filling voids in their lives or by receiving worthwhile advice in delicate situations, the potential for problems on many levels cannot be ignored.
Until now, plagiarism and the threat AI poses to young people's capacity to think for themselves have dominated education discourse. For many, these remain high on the agenda, but indicators point to another, potentially far greater concern on the horizon.
Visible vs. Invisible Uses of AI
Where communication about AI use is open with teachers or parents, students are largely asked how they use AI in assessments and in learning, about their fears and about the future. Ask any educator about student AI use and they'll likely focus on pragmatic applications. Maybe this clean, educational narrative of AI as just another learning resource is precisely what schools want to hear. It fits neatly into existing academic frameworks.
But we've been here before. Remember when teenagers claimed they only used social media to keep in touch with friends, while parents and teachers remained oblivious to the complex social hierarchies, cyberbullying and identity experimentation that went on in those spaces? The history of online technology is filled with examples where what adults are told does not match what actually happens.
Think of Snapchat, explained to parents as "just sharing photos with friends" while its disappearing messages were used to exchange content students didn't want discovered. Or gaming platforms such as Discord and Roblox, presented as innocent entertainment while functioning as unmonitored social spaces where children interacted with strangers. Even TikTok, initially dismissed by adults as merely a dance app, quickly became a primary source of political information and identity formation for many teenagers. In each case, the sanitized explanation masked a more complex reality.
The Problem With AI Companions
AI companions could provide a more private, and messier, space than social media ever has. Why?
With no visible posts, no friend networks and no public footprint, these one-on-one conversations leave no trace for concerned adults to discover. When a student claims they're only using ChatGPT for homework help, we have virtually no way to verify that this is the full story. The academic use may be entirely truthful, but it may also be the visible tip of a much deeper relationship developing behind closed digital doors.
The friendly tone, pseudo-empathy and listening capabilities of AI companions could create unprecedented risks of dependency and social isolation. Unlike social media, where real people are at least on the other end, AI relationships exist in a psychological vacuum, offering constant validation without the friction of real human relationships, the very friction that builds crucial social skills. Young people struggling with peer acceptance may find AI's always-available, never-judging presence irresistibly comforting.
That comfort could lead to withdrawal from more challenging human connections, and this gradual substitution of AI interaction for human relationships represents a fundamentally different risk from previous digital tools. It doesn't simply distract from social development; it potentially replaces it with an addictive simulation that feels emotionally safer yet leaves critical interpersonal skills undeveloped.
AI companions also lack human judgment. Where teens might hesitate to discuss certain topics with peers or adults for fear of embarrassment, ridicule or consequences, AI systems respond with consistent engagement regardless of subject matter. This judgment-free zone becomes particularly appealing for exploring identity questions, emotional struggles or controversial viewpoints that young people might otherwise keep to themselves. The combination of complete privacy and unconditional acceptance could create a uniquely seductive environment with few parallels, and a correspondingly greater psychological impact.
Conversations We’re Not Having
Maybe the focus needs to change, or maybe it’s too soon to change direction in an already complex landscape. What’s unfolding with AI companionship among young people doesn’t fit neatly into behavior policies or digital literacy lessons, while many people still deny the realities of life with AI.
Yet educators and parents alike need to recognize how much these conversations matter: giving young people a safe space to talk about how they are using AI and what they are using it for, and educating them on the risks of seeking companionship from generative AI tools. This isn’t just about AI. It’s about identity, vulnerability and the need to feel heard, often in the absence of anyone truly listening.
The truth is, if the most meaningful or honest conversations a student is having are with a chatbot, and we’ve created no space to talk about that, educators and parents might be missing the point. Whether that’s because we’re unaware or too busy asking the wrong questions, neither helps the situation.