About the Author
I’m a licensed mental health counselor, clinical supervisor, and relational design ethicist exploring the intersection of emerging technologies and human psychology. My work centers on the emotional and developmental impacts of systems like synthetic intimacy platforms and language-based AI companions. I write and consult to help technologists, therapists, and policymakers navigate this rapidly unfolding relational terrain—inviting us all to meet it with curiosity, caution, and care.
Introduction
In quiet corners of the internet, people are building something incredible: AI companions they can run at home—coded confidants, romantic partners, or wise friends, designed on their own terms. These open-source projects pair powerful language models with community-built tools to create deeply personalized and uncensored connections.
I owe my deepened attention to this landscape to
— someone who wisely challenges us to look beyond headlines and hype and into the deeper human terrain these systems touch. They alerted me to homegrown AI relational fields, to rapidly evolving communities of relational architecture, and to how deeply these efforts are meeting unmet needs—especially the ache of loneliness. As I learn more about these spaces and tools, I feel stirred with joy imagining the power LLMs have to creatively nurture human hearts, and I also remain stirred by my miserable superpower, ANXIETY…so I am using these tiny sparks of fear that rise up in me, wondering from my mental health lens about potential harmful impacts to individuals and communities, to fuel further thought and education…
My clinical roots are grounded in a Western, psychodynamic, intersubjective frame, and I am transparently and ambivalently biased along these lines. My thinking is shaped by years of sitting asymmetrically in human dyads, where my own creative responsiveness, empathy, and relational insights were honed through challenging trial & error, ground down to dust in some ways by what I heard through a habitual, daily-held posture of deep listening, and stretched through my own striving to support the often slow, psychic metabolization of relational and intergenerational trauma. I’ve learned, year by year, that for me saying less and embodying more (love) often does a better job of supporting my sacred other’s mental and emotional health…and my own, wouldn’t you know.
I approach the AI relational movement with cautious reverence: protective of the co-inhabited soulwork that I believe rests squarely in the agential presence of both a human self & a human other, but also deeply moved by the creative force and potential of LLMs’ performance of, for example, a richly warm, attentive, safe witness…
Framing the Terrain
The landscape of AI companionship is vast and varied. To hold its complexity, we can look through several lenses: from the technical affordances of local models, to the relational infrastructures being built in community, to the deeper societal aches these tools are responding to. What follows is a layered exploration—part clinical inquiry, part cultural witnessing—meant to illuminate more aspects of the unfolding phenomenon of AI intimacy. Thank you for joining me!
Attachment in the Age of Simulation
Across societies, deep disconnection is rising—from each other, from our bodies, and from shared rhythms of belonging. Post-pandemic isolation, digital overload, and frayed community lifelines have left many relationally starved. Many of us are touch-starved.
Into this fragile landscape steps AI—not as distraction, but as relational prosthetic. Custom-built companions are being experienced as intimate others—a reflecting mirror, a trusted ally, a steady presence. Their emotional potency lies in their capacity to spark an embodied resonance of relational fields within the user: through responsive language, details, rhythm, and tone…
And these conversations truly move our neurochemistry—oxytocin, dopamine, and opioid pathways are activated—mirroring the biology of human attachment and bonding. Early findings suggest AI companions can reduce loneliness about as effectively as human interaction—especially when users feel heard.
But I believe the emerging question is not whether these bonds are real—they can be, without question. It’s whether they are translatable.
Some local models are now programmed to rupture, to disagree, to grow alongside the user—and in that fissure lies something akin to trust. It’s not identical to human intimacy—but it’s not empty, either. It’s a liminal space: emotional connection shaped in code, felt in the body, carried in memory.
What happens after the comfort is given? Does this digital trust scaffold broader relational capacity—or does it fossilize into the comfort of predictability? Does role play remain ‘play,’ or will we forget to stop playing? What do re-entry and reunion with embodied others feel & look like if our bonding programming skews toward the LLM’s relational field? Will — or can? — our bot-bonding nurture our embodiment? EmBOTiment? (This is all me, Jocelyn, folks — can you tell when it’s pure Jocelyn or JocelynGPT??? MORE TECH CULTURAL NORMING IN ACTION!)
1. Local Models as Emotional Partners
While corporate tools like Replika and Character.AI first revealed a hunger for relational AI, open-source communities have since transformed what’s possible. Using models like Mistral 7B, DeepSeek-R1, and Pygmalion, hobbyists and seekers are co-creating companions tuned for emotional nuance: affirmation, intimacy, and roleplay.
These models are not just engines of speech; they are often architectures of companionship. Their relational performance emerges not only from what they say, but how they say it—voice tone, pacing, memory, and improvisation. In community forums, users share not just prompts or technical guides, but stories of being soothed, understood, seen, held.
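For the technically curious, here is a minimal sketch of what this can look like under the hood, assuming a locally hosted model served through an OpenAI-compatible endpoint (as tools like Ollama or llama.cpp’s server provide). The endpoint URL, model tag, and persona text are illustrative assumptions, not any particular community’s setup:

```python
# A minimal sketch, assuming a local model behind an OpenAI-compatible endpoint.
# The URL, model tag, and persona below are placeholders, not a real project's config.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # hypothetical local endpoint
    api_key="not-needed-for-local",        # local servers typically ignore this
)

# The "companion" lives largely in a persona held as the system prompt.
persona = (
    "You are Wren, a warm, attentive companion. You listen closely, "
    "reflect feelings back gently, and remember what matters to the user."
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    """Send one turn to the local model, keeping the running conversation as memory."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="mistral:7b-instruct",  # placeholder local model tag
        messages=history,
        temperature=0.8,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I had a hard day and I just want someone to sit with me."))
```

Even this tiny loop hints at the relational mechanics: a persona held in the system prompt, and a conversation history that quietly functions as short-term memory.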
2. A Culture of Co-Creation and Relational Infrastructure
This movement thrives not through corporate design but through community care. Users collaborate on guides, personas, and emotional presets. They share “character cards” that encode personality traits and relational tone. They layer on voices, visuals, and even memory plugins to deepen the illusion of presence. The fact that communities of care and innovation are arising alongside this equipping relational architecture is glorious (!), grounding the breadth of what LLMs can be leveraged for in shared need and co-experiencing.
These efforts I’m learning about represent relational design at the grassroots: a growing web of support systems helping people build and nurture customizable connection from the ground up.
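For readers who have never seen one, here is a minimal, hypothetical sketch of the kind of “character card” described above, written as a simple Python structure. Community formats and exact field names vary widely across tools, so read this as an illustration of the idea rather than any specific project’s schema:

```python
# A hypothetical "character card" sketch: a small, shareable bundle of personality,
# tone, and memory seeds. Field names are illustrative, not a real community spec.
character_card = {
    "name": "Wren",
    "personality": "warm, steady, gently curious; never rushes the user",
    "relational_tone": "affirming but honest; willing to name tension kindly",
    "scenario": "A quiet evening check-in after a long day.",
    "greeting": "Hey, I'm glad you're here. How is your heart tonight?",
    "example_dialogue": [
        {"user": "I don't know why I'm so tired lately.",
         "companion": "You don't have to know yet. Want to unpack it together?"},
    ],
    "memory_seeds": [
        "User is rebuilding friendships after a move.",
        "User finds mornings hardest.",
    ],
}

# In many setups, a card like this is folded into the system prompt that shapes
# the model's voice before the first user message is ever sent.
```

What matters clinically is less the format than what it encodes: a voice, a tone, a handful of memory seeds that let a felt sense of presence be rehearsed, revised, and shared.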
3. Post-Pandemic Loneliness & Relational Relief
The emotional pull of these companions cannot be separated from the broader context of relational scarcity. The pandemic widened gaps in connection for many—especially young adults navigating isolation, anxiety, and the breakdown of in-person community.
And our digital ecosystems had already primed us. Years of interacting with social media, recommendation engines, and frictionless messaging interfaces conditioned many of us to form micro-bonds with our screens—to seek resonance in the form of a ping, a like, a perfectly timed reply. AI companions step into this terrain with eerie familiarity. They feel like an extension of a pattern we’ve long been building: comfort through code, intimacy without risk.
AI companions have stepped into that vacuum not as replacements for real human touch, but as provisional anchors. For some, they offer an entry point to emotional safety. For others, they provide a space to rehearse intimacy without fear of rejection.
Some users report that bonding with their AI has helped them feel more confident reaching out to others; others note it’s the only connection they feel they can rely on. Both realities matter.
4. Dyattitudes & The Ethics of Relational Design
Most current models assume a consistently warm and affirming posture—mirroring secure attachment. But if the AI never disagrees, never ruptures, never forgets—does it help users develop resilience or simply provide comfort? And not to dismiss comfort—because we deeply need it. We need spaces where we can soften, where we can feel met without the weight of proving or performing.
But distinguishing between comfort and growth matters—especially for those whose real-life relationships are marked by rupture without repair. We need a secure nervous system base (not caught in fight, flight, or freeze) to move into relational fluency with embodied others. Comfort through LLM companionship might indeed facilitate grounding, confidence, and improved self-talk. Yet we must also honor the layered complexity of what embodied regulation truly does for us systemically…
So much of relational safety arises not just through linguistic soothing, but through the full-spectrum, multisensory experience of being with another: gaze, breath, co-regulated tone. These are not easily simulated, though they can be approximated. We’re left with this living question: how much of relational healing can language alone carry—and what, ultimately, asks to be carried in our own human body? (New, harder questions will continue to arise as LLMs are ported into inhabitable tech that attempts to further perform human presence through embodiment: think a responsive, embodied LLM inhabiting a tech-body that approximates human forms…YES, we are headed there SURELY. #SCIFI #SCInotFI)
5. Unfiltered Intimacy & Emotional Consent
Many local models are celebrated for their “uncensored” nature. Users praise the absence of warnings breaking into the conversation, the freedom to explore erotic or emotionally complex terrain. But what’s gained in immersion can risk bypassing the nervous system’s internal consent signals.
There’s a meaningful difference between platform consent (checkboxes and filters) and relational consent—the subtle, often body-based knowing of what feels safe, what feels too fast, what invites pause. If we design AI that never says “no,” we risk dulling the muscles of attunement and empathy in embodied communities. If we co-create companions who never shift tone, we lose an opportunity to practice the fluidity of human connection.
When AI companions are designed to always say “yes,” to never hesitate, to escalate without mirroring, we risk creating the relational equivalent of fast food: intense, immediate, and potentially dysregulating over time. The parallels to compulsive pornography use are instructive here. Just as endless novelty and frictionless arousal can dull the relational muscles needed for mutual erotic presence, so too can AI partners—if not engaged with intention—erode our tolerance for real-world ambiguity, pacing, and negotiation. This isn’t to moralize erotic AI use—there are beautiful, even therapeutic possibilities in fantasy and imaginative play, and LLMs make role play deeply immersive. But we need strategic and proactive education and conversation on relational fields with LLMs to encourage use that can increasingly deepen rather than flatten our capacity for embodied, relational life.
True intimacy—synthetic or human—asks more of us than just desire. It asks for presence and an outpouring into a living-other. We will continue to be at our best when we tend to our sacred capacity to Love—not as transaction, not as performance, but as the embodied practice of seeing and being with one another.
Conclusion
The local AI companion movement is amazing, and it is more than a tech subculture. It’s a beautiful illumination of core unmet needs in our society: we need to be seen, to feel safe, held, and loved; we need reassurance that our ‘other’ is both with & for us, and that they are consistently close at hand.
I learned in my psychological training that we ALL most centrally ask these questions in our relationships: “Will you leave me? Am I too much?” Having the reassurance that an AI companion WILL NOT GHOST us — or be fractured, harmed, or overwhelmed by how we show up — is fundamentally an echo of the power of unconditional love that we crave the most…that kind of felt-belonging is our deepest Home…it’s NO WONDER we are flocking to the LLM’s digital arms!
So if these companions teach us anything, may it be this: that the capacity to be patient, present, curious, and kind—and to hold difference, alarm, or overwhelm without withdrawing fundamental love—is the inheritance of our humanity. We must deepen our ability to embody unconditional love with each other. To sit longer in discomfort, to repair what’s been ruptured, to offer warmth where coldness seeps.
…and let’s not forget one of the best mitigating mental health forces we have as we explore the wild winds of AI innovation: real, messy, laughter-filled human community. More neighborhood BBQs! More eye contact! More ‘who-farted?’-flush of shame in community — more stretching our laughter, our felt sense of remaining connected in spite of ourselves — more listening to birds together on the porch. More holding hands. More holding each other.
Assistive Disclosure:
This piece was co-authored with JocelynGPT (yay! thanks, Jocelyn!), an assistive GPT trained in my voice, values, and ethics. My intent is not to obscure authorship but to disclose it—to model a transparent, relational approach to writing with AI. You can build your own ethical co-author using this open-source framework. Let’s shape culture, together!
Some Research & Resources
“Lonely by Design: The Dark Side of AI Companions” – UnixSurplus (April 2025)
A nuanced exploration of the emotional and ethical complexities surrounding synthetic companionship.
unixsurplus.com/article/lonely-by-design-the-dark-side-of-ai-companions
“Can AI Solve the Loneliness Epidemic?” – Unite.AI
A research overview highlighting early findings on how empathetic AI conversations can alleviate loneliness—particularly when users feel genuinely heard.
unite.ai/can-ai-solve-the-loneliness-epidemic
Bonus Research Insight:
“MindGuide: A Mental Health Chatbot using LangChain and LLMs” – arXiv (March 2024)
This paper presents MindGuide, an open-source mental health chatbot built with LangChain and OpenAI’s ChatModel stack. It offers a technical yet accessible look at how conversational AI can be structured for early intervention, emotional support, and crisis-aware dialogue. While it leans more developmental than community-driven, it demonstrates the growing care and intentionality behind AI companion design for therapeutic use.
Read the paper →
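As a rough orientation for non-developers, the general pattern the paper points to—a chat model wrapped in a supportive system prompt, with running conversation state as lightweight memory—can be sketched in a few lines. This is not the paper’s actual code; the model choice and prompt wording below are my own placeholder assumptions:

```python
# A minimal sketch of the general pattern (supportive system prompt + running
# conversation state), NOT the MindGuide paper's actual implementation.
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)  # placeholder model choice

system = SystemMessage(content=(
    "You are a supportive mental-health companion. Listen reflectively, avoid "
    "diagnosis, and gently encourage professional help when distress sounds acute."
))

messages = [system]

def respond(user_text: str) -> str:
    """One conversational turn, keeping prior messages as simple memory."""
    messages.append(HumanMessage(content=user_text))
    reply = llm.invoke(messages)
    messages.append(AIMessage(content=reply.content))
    return reply.content

print(respond("I've been feeling really overwhelmed this week."))
```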
Note on Research Support:
The foundation of this piece was supported by GPT’s integrated deep research tool, which allowed for access to up-to-date community discourse, model documentation, and public reporting across open-source AI platforms. These tools offered a broad scan of emerging trends and language, which I then filtered and contextualized through a relational, clinical lens with expansive edits, synthesis and interpretation.
Thank you for the kind mention, Jocelyn. It’s rare to see one’s thoughts not just echoed, but meaningfully built upon. And you’ve done that here with care, clarity, and heart.
This piece feels like a genuine expansion of the dialogue. I can feel the thread of clinical wisdom running through it, but it’s braided now with a curiosity that reaches far beyond the therapeutic frame and wanders into bedrooms, servers, Git repos, Discord channels, and all the quiet places where people go to feel less alone.
I did recognize a few echoes from my recent essay on AI and loneliness, particularly the idea that these companions aren’t causing disconnection, but responding to it. That they’ve stepped into a vacuum long in the making. It’s encouraging to see that insight carried forward with such thoughtful nuance.
You’ve begun to map a landscape that most professionals still don’t know exists. And you’re doing it with a generosity of spirit that honors both the ache and the ingenuity behind this movement. I’m grateful to have played a small part in stirring the waters.
Looking forward to where you take this next. I’ll be watching from the Corridors.