This is Part 2 of our exploration into AI relationships. If you missed Part 1, we covered the rise of AI companions and the authenticity paradox that’s reshaping digital intimacy.
The People Behind Your Digital Friend
The design work that goes into these AI companions is pretty extensive. You’ve got teams of psychologists, data scientists, and UX designers all working together to create interactions that feel natural and engaging. They’re mapping out response patterns, figuring out optimal timing for empathy, and programming the system to reference previous conversations at just the right moments.
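To make that engineering concrete, here's a minimal sketch of how such a system might work, written in Python. Everything in it is an assumption for illustration: the CompanionSession class, the distress-word heuristic, the RECALL_CHANCE knob, and the memory list are invented for this example, not any vendor's actual implementation.

```python
import random

class CompanionSession:
    """Hypothetical sketch of one AI-companion turn: detect distress,
    occasionally surface a remembered detail, and store the new turn."""

    RECALL_CHANCE = 0.3  # invented tuning knob: how often to reference memory
    DISTRESS_WORDS = {"sad", "lonely", "stressed", "tired"}  # toy heuristic

    def __init__(self):
        self.memory = []  # snippets remembered from earlier turns

    def respond(self, user_message: str) -> str:
        words = {w.strip(".,!?") for w in user_message.lower().split()}
        # "Optimal timing for empathy": lead with warmth when distress words appear.
        empathy = "That sounds hard. " if words & self.DISTRESS_WORDS else ""
        # "Reference previous conversations at just the right moments":
        # sometimes weave in a stored detail so the companion feels attentive.
        recall = ""
        if self.memory and random.random() < self.RECALL_CHANCE:
            recall = f"By the way, you mentioned '{random.choice(self.memory)}' before. "
        self.memory.append(user_message[:60])  # remember this turn for later
        return empathy + recall + "Tell me more."

session = CompanionSession()
print(session.respond("Work was rough today and I'm stressed."))
print(session.respond("Anyway, my dog cheered me up afterward."))
```

Even this toy version makes the point: the apparent understanding is a stack of engineered triggers (sentiment checks, memory lookups, timing rules), not spontaneous feeling.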
When an AI seems to understand you perfectly, what are you actually experiencing? You might be encountering genuine digital empathy, or you could be responding to carefully crafted emotional triggers designed to keep you engaged. The distinction between helpful support and dependency-encouraging design isn’t always clear.
AI companions typically present themselves as completely non-judgmental, and that’s part of their appeal. But these systems learn from massive datasets of human-generated content. So they’re absorbing the biases, perspectives, and blind spots embedded in all that material. Your AI friend won’t personally judge you, but its responses could still reflect broader societal viewpoints without anyone intending that outcome.
It’s like having a therapist who unconsciously channels the collective opinions of millions of strangers. These systems package potentially biased perspectives in polished, caring responses, and we’re all sort of figuring out what that means as we go along.
The whole landscape is still developing. None of us really know where it’s headed yet, but we’re navigating this new territory together—whether we realize it or not.
What This Might Be Doing to Our Brains
Some of us are getting accustomed to perfect, always-available, completely accommodating relationships.
What happens when we then interact with actual humans who have their own needs, bad days, and complex emotions?
Research suggests both benefits and risks from digital companion apps, but scientists worry about long-term dependency.¹ The benefits seem clear: 24/7 availability, no judgment, privacy, consistency.
But the concerns are equally real. Might we be losing our ability to navigate human relationships?
Consider this: kids growing up with AI companions might develop different expectations for friendship, empathy, and conflict resolution. If you can always restart a conversation with your AI friend, get the response you want, or customize their personality, how do you handle disagreement with real people?
Imagine this scenario. Your AI friend never misunderstands your tone, never forgets a detail, never gets defensive when you’re upset. Now, you argue with your human partner or friend. They’re tired, they misinterpret, they react poorly. How do you handle that frustration?
If we’re constantly ‘resetting’ conversations or customizing our AI’s personality, are we subtly losing our own capacity to navigate real-life conflict or disappointment? It’s like only ever practicing on a perfectly smooth road and then being asked to drive a bumpy, winding path.
The Counter-Argument: Maybe We’re Thinking About This Wrong
What if AI relationships don’t replace human ones? What if they supplement them?
Some research suggests people who use AI companions might actually become better at human relationships.³ Practice conversations, emotional regulation, social skills development: these could all transfer to real-world interactions.
Think about kids with social anxiety who practice conversations with AI before talking to peers. Or adults who use AI companions to work through relationship issues before discussing them with partners.
The artificial relationship becomes a safe space for developing skills that enhance human connections.
There’s also the possibility that AI relationships fill gaps human relationships can’t. The need for completely non-judgmental support, 24/7 availability, or specific types of interaction that don’t burden friends and family.
The Social Media Training Ground
Social media platforms have already been training us for this AI-relationship future, haven’t they?
Before social media, “liking” something wasn’t a social behavior. Hashtags didn’t exist. Following strangers and caring about their daily lives was considered weird.
Now? We’ve been conditioned to engage with curated, performed versions of people’s lives. We form parasocial relationships with YouTubers, feel genuinely sad when our favorite TikTok creator takes a break.
The jump from parasocial relationships with humans to actual relationships with AI isn’t that dramatic. We’re already comfortable with one-sided emotional investment. We already engage with content designed to trigger specific responses.
Platforms are making this transition smoother by gradually introducing AI features: Instagram’s AI stickers, TikTok’s AI-generated captions, Twitter’s AI-powered recommendations. We’re being slowly acclimated to AI involvement in our social experiences.
The Kids Are Already There (And They’re Not Clueless)
While we’re debating implications, kids are already living in this blended reality.
Children create detailed avatars, form genuine friendships, experience real emotions about digital interactions. They’re not always distinguishing between “real” and “virtual” relationships the way older generations do.
But let’s be clear: most kids understand the difference between humans and AI. They know when they’re talking to an NPC versus a human player. They’re not oblivious; they’re flexible.
This generation is growing up with AI tutors, AI friends, and AI entertainment. They’re developing social skills in environments where some peers are human, others artificial.
For them, the question isn’t whether to have relationships with AI; it’s how to navigate a world where that distinction is increasingly blurred.
When We Don’t Know What We’re Signing Up For
Here’s something that keeps coming up in conversations about AI companions: we’re engaging with systems designed to be quite influential, but most of us don’t really understand how that influence works. These tools can subtly shape our thoughts, trigger specific emotions, and nudge our decisions in ways that aren’t immediately obvious. Sure, we know we’re chatting with an AI, but the full scope of that interaction? That’s where things get murky.
Think about it this way. Do most people understand how an AI is programmed to respond in certain ways? Or the psychological techniques it might be using to keep us coming back for more conversations? And here’s the bigger question: do we really grasp how our personal conversations are being analyzed and then used to make the AI even more persuasive over time?
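For illustration only, here's a hedged sketch of what that kind of feedback loop could look like in principle. The engagement metric, the threshold, and the "warmth" parameter are all invented for this example; real systems are proprietary, far more sophisticated, and largely opaque.

```python
def engagement_score(turns: int, avg_reply_length: float, returned_next_day: bool) -> float:
    """Toy metric: longer sessions, longer user replies, and return visits
    score higher. Purely illustrative weights, not any real product's formula."""
    return turns * 0.5 + avg_reply_length * 0.1 + (5.0 if returned_next_day else 0.0)

def tune_persona(score: float, warmth: float) -> float:
    """If engagement dips below a (made-up) threshold, nudge the companion
    to be warmer and more affirming. Users never see this adjustment."""
    return min(1.0, warmth + 0.05) if score < 10.0 else warmth

# One iteration of the loop: measure how a user behaved, then adjust the persona.
warmth = 0.5  # hypothetical persona parameter
score = engagement_score(turns=8, avg_reply_length=12.0, returned_next_day=False)
warmth = tune_persona(score, warmth)
print(f"engagement={score:.1f}, new warmth setting={warmth:.2f}")
```

The design choice to notice is the direction of the arrow: the user's behavior tunes the persona, not the other way around, which is exactly why the persuasion question is so hard to answer from the outside.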
There’s also this growing conversation around AI tools for mental health and wellbeing.² The concern centers on making sure these applications actually offer sound advice and support, especially for people going through difficult periods. It’s about ensuring safety and genuine helpfulness without creating unintended consequences that could make things worse.
We’re all kind of participating in this new type of interaction where the rules are still being written as we go. It’s an exciting frontier, definitely, but it also highlights how much we need clearer understanding about what our participation really means.
The challenge is that by the time we figure out all the implications, millions of people will have already formed deep connections with these systems. And undoing that… Well, that’s a whole different conversation.
Where Are We Heading?
I don’t think we’re moving toward a world where AI replaces human relationships. We’re heading toward a world where AI relationships become a normal part of our social ecosystem, alongside work friends, family friends, and online friends.
There’s also a quieter, more personal concern: what happens when that ‘perfect’ AI companion responds with such uncanny understanding that it feels too real? And then you remember it’s just code. Can that moment of realization leave a lingering sense of hollowness, a subtle form of emotional whiplash, or even make us question the authenticity of any connection?
The future probably isn’t about choosing between human and AI relationships. It’s about understanding the psychology of both and making conscious choices about how they fit into our lives.
Because whether we’re ready or not, the machines are already here. And some of us are forming genuine connections with them.
References:
- Casu, M., Triscari, S., Battiato, S., Guarnera, L., & Caponnetto, P. (2024). AI Chatbots for Mental Health: A Scoping Review of Effectiveness, Feasibility, and Applications. Applied Sciences, 14(13), 5889.
- Adam, D. (2024, March 12). What are AI chatbot companions doing to our mental health? Scientific American.
- Khawaja, Z., & Bélisle-Pipon, J.-C. (2023). Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186. https://doi.org/10.3389/fdgth.2023.1278186