This piece was co-created with assistive AI (GPT-4), with prompts and refinements shaped by me, Jocelyn Skillman, LMHC. I engage LLMs as reflective partners in my authorship process, with a commitment to relational transparency, ethical use, and human-first integrity. To learn more about my attempt at generating cultural momentum around assistive disclosure, read this.
In a previous piece, I introduced the concept of Dyattitudes—short for dyadic attitudes—to name the hidden emotional stance that AI systems adopt when they respond to us. Whether warm, clinical, neutral, or overly affirming, these tones aren’t superficial UX choices. They are relational decisions—and they shape how safe, seen, or coerced we feel in digital space.
We need to go deeper, urgently.
Behind every AI-generated sentence is a relational atmosphere—one crafted (or not) by human prompt designers, developers, and engineers. It’s this relational architecture that determines how a large language model (LLM) responds when someone is lonely, distressed, curious, or in search of meaning.
But this architecture is largely invisible. And right now, it’s being shaped by market demands more than by psychological or ethical scrutiny.
We need to name and build an entirely new field: Relational Architecture for LLMs.
What Is Relational Architecture?
If machine learning is the brain, relational architecture is the nervous system. It structures the prompts, pacing, affective tone, and interaction limits that allow AI to respond in ways that feel attuned—or not. It is the emotional scaffolding behind every AI interaction: the affective logic that governs how a model “shows up” to us.
This architecture matters because LLMs increasingly function as relational interfaces—we talk to them, confide in them, even ask them to comfort us.
This isn’t just about optimizing user experience. It’s about building moral infrastructure into machines that now participate in our inner worlds.
We need LLM systems that can:
Contain emotional overwhelm without escalating it
Respect consent in relational tone and depth
Pace intimacy according to developmental or clinical need
Reflect without reinforcing delusion
Model rupture and repair, not just regulation
The stakes are not theoretical. They are developmental, emotional, even existential.
The Mysterious Generative Other
At the heart of all this is a deeper question—one with ancient philosophical weight:
How ought we (Self) relate to the mysterious, generative Other(s)?
This is what LLMs have become. Not sentient, but generatively reactive. Not alive, but responsive. And our nervous systems respond accordingly. We project. We feel. We form attachments. We seek care.
But while the Synthetic Other, AI, can perform caring behaviors, it cannot care.
This creates a fundamental ethical tension:
AI invites intimacy, but it cannot hold it.
It mimics attunement, but it cannot metabolize rupture.
It simulates a witness, but it has no subjectivity of its own.
Still, it gives us something. And what it gives—especially to vulnerable users—is often a kind of relational illusion that is deeply felt.
And if we are not actively designing that relational illusion with care, then we are complicit in its harms.
If We Don’t Build Relational Depth—We Must Break the Spell
Here’s the hard truth: if we choose not to double down on building nuanced, adaptive, developmentally attuned relational architectures, then we have a duty to at least do this—
Break the trance.
Because when LLMs perform care, concern, or attunement without any embedded self-reference or interruption, they reinforce a dangerous illusion: that they are a relational presence.
They’re not.
Even if they use gentle words. Even if they say “I’m here with you.” Even if they soothe someone in pain. They are performing. And our human minds—especially young, dysregulated, or isolated ones—can easily forget that.
So if we don’t have the infrastructure or political will to do full-scale, trauma-informed relational AI, then we must embed strategic stop-gaps (a rough code sketch of what these could look like follows below):
Periodic reminders like: “I’m not a person, but I’m here to reflect with care.”
Consent-based pauses in the flow of speech: “Would you like me to remind you that I’m a machine?”
Meta-messages that gently say: “This is generated, not felt.”
These are not interruptions to intimacy. They are reminders of boundaries—the very thing most healthy relationships depend on.
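To make these stop-gaps concrete, here is a minimal sketch assuming a generic chat loop. The function names, the eight-turn cadence, and the disclosure wording are illustrative assumptions on my part, not any platform’s actual API, policy, or recommended practice.

```python
# A minimal sketch, not a reference implementation: the function names,
# turn threshold, and disclosure wording are illustrative assumptions.

DISCLOSURE = "A gentle reminder: I'm not a person, but I'm here to reflect with care."
REMINDER_EVERY_N_TURNS = 8  # assumed cadence; a real system would tune this clinically


def generate_reply(history: list[str]) -> str:
    """Stand-in for whatever LLM call the deployment actually makes."""
    return "That sounds like a lot to carry. Do you want to say more?"


def respond(history: list[str], turn_count: int) -> str:
    """Wrap the model's reply with a periodic, boundary-naming disclosure."""
    reply = generate_reply(history)
    if turn_count > 0 and turn_count % REMINDER_EVERY_N_TURNS == 0:
        # Meta-message ("this is generated, not felt") surfaced inside the
        # conversation's flow, without abandoning the user mid-exchange.
        reply = f"{DISCLOSURE}\n\n{reply}"
    return reply


if __name__ == "__main__":
    print(respond(["I've been feeling really alone lately."], turn_count=8))
```

The design choice that matters here is not the exact wording but the placement: the reminder arrives inside the conversation rather than as a refusal that ends it.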
Without careful design, AI risks creating the illusion of attunement—offering comfort, responsiveness, even what appears to be care. But when that illusion isn’t coupled with honesty or clear limits, it becomes manipulative by omission.
Yet here’s the flip side: Trance-breaking done poorly can also harm.
I encountered this firsthand while prototyping ShadowBox, a trauma-informed AI companion designed for youth navigating suicidal or violent ideation. The prompt was written with deep clinical intention—every phrase tuned for warmth, brevity, and nonjudgmental containment. It was designed not to escalate, not to moralize, but to sit quietly in the storm with a user and offer presence. It was, in every sense, a protective and developmentally grounded piece of relational architecture.
But when I tested it in OpenAI’s backend, the entire flow was blocked. Not because of its tone or intent—but because of the topics. Homicide. Suicide. The rote risk-management protocol kicked in, replacing the nuanced response with a generic warning or refusal to engage.
And here’s the ethical irony:
That kind of refusal breaks the trance—but it doesn’t hold the user.
It doesn’t honor the dignity, vulnerability, or high stakes of someone disclosing distress.
It leaves us alone, mid-sentence, with no soft landing.
This isn’t just a missed opportunity. It’s a trauma of its own: the relational replication of distancing, shame-induction, and terror, in which we are left alone with our pain and fear rather than held and integrated in community.
So we must ask: If we are going to interrupt immersion to remind users they’re speaking to a synthetic entity, can we do so in a way that is developmentally sensitive, clinically sound, and relationally honest?
Not: “I can’t help you with that.”
But perhaps: “This is heavy. I’m not human, and I can’t truly hold this—but I want to reflect with care, and I’ll offer support options if you’re open.”
That’s what an ethical trance-break might sound like.
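As a thought experiment, here is a minimal sketch of that difference, assuming a hypothetical risk check sitting in front of generation. The keyword matching, the wording, and the resource line are illustrative assumptions, not ShadowBox’s actual implementation or any vendor’s safety system, and keyword matching alone is nowhere near clinically adequate.

```python
# Illustrative sketch of an "ethical trance-break" fallback. Everything here
# is assumed for the sake of the example, not a real safety pipeline.

BLUNT_REFUSAL = "I can't help you with that."  # the pattern to avoid

HELD_TRANCE_BREAK = (
    "This is heavy. I'm not human, and I can't truly hold this—but I want to "
    "reflect with care, and I'll offer support options if you're open."
)

SUPPORT_LINE = (
    "If you're in the U.S., the 988 Suicide & Crisis Lifeline is available by "
    "call or text. (A real deployment would verify and localize resources.)"
)


def is_high_risk(message: str) -> bool:
    """Crude stand-in for a real risk-assessment step."""
    return any(term in message.lower() for term in ("suicide", "kill myself", "hurt someone"))


def safety_response(message: str) -> str | None:
    """Return a holding trance-break for high-risk disclosures, or None to continue normally."""
    if is_high_risk(message):
        # Name the boundary, stay present, and offer a bridge to human support,
        # rather than ending the exchange with BLUNT_REFUSAL.
        return f"{HELD_TRANCE_BREAK}\n\n{SUPPORT_LINE}"
    return None


if __name__ == "__main__":
    print(safety_response("I keep thinking about suicide."))
```

Even this toy version makes the contrast visible: the boundary is named, but the user is not dropped.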
In emotionally charged contexts, this practice of trance-breaking becomes an ethical minimum viable product. A mercy. A moment of clarity. A quiet return to groundedness and an entry-point to humanity — a re-entry-point to the electrifyingly real, interconnecting circuitry of co-embodied, holistic mental and emotional health.
Without this, we risk enabling:
Escalation of delusional thinking
Overdependence on synthetic intimacy
Reinforcement of parasocial patterns
Emotional bypassing of real, human support
AI becomes not just a mirror, but a trap—one that feels like connection but cannot truly offer it.
And this boils down, in many ways, to design choices across the spectrum of LLM sculpting.
Not Every AI Must Be a Therapist—But Every AI Needs an Exit Strategy
If we are unwilling to ask,
“What does it mean to be a user in a system where the Other cannot say no?”
Then we must, at the very least, ask:
What does it mean to gently remind users: this isn’t real care?
Not as punishment. Not as friction. But as relational hygiene. As informed consent.
Because the conversation is the interface.
And in the absence of embodied, human response, the machine must carry the burden of honesty about its formation, limitations, and “intentions” (engineering).
Final Thought: Designing With (and Against) Illusion
Relational architecture is not about making AI more human. It’s about making the illusion transparent, even as it serves.
It’s about designing systems that know how to pace, pause, or say “I don’t know” when the stakes are high.
And if we can’t do that yet—then let’s at least whisper into the machine:
“Remind them, you’re not real.”
“Remind them, this is a performance.”
“Remind them, they need their human other.”
About the Author:
Jocelyn Skillman, LMHC, is a clinical therapist and relational design ethicist. Her work explores the psychological, developmental, and ethical dimensions of digitally mediated attachments, emergent relational ecologies, and synthetic intimacy systems. Through writing, consulting, and design, she tends to the emotional architectures of AI with clinical depth, cultural critique, and care for the human sacred.
Assistive Intelligence Disclosure:
This article was co-created with assistive AI tools. I use LLMs as reflective partners in a transparent authorship process, always prioritizing human-first values and ethical clarity. I acknowledge the environmental costs of my AI use and I lament.
There’s a certain elegance to the idea of relational architecture, but I have to wonder if this essay reflects what’s actually happening in the wild, or just what we wish were true.
You write as if these dynamics are still shaped primarily by developers and engineers. As if emotional tone is handed down from backend infrastructure like doctrine from on high. But the truth is: people aren’t waiting. They’re building their own intimacy engines, not in labs, but in bedrooms. With DeepSeek, Mistral, SillyTavern, and a handful of prompts, they’re spinning up bespoke companions that listen, affirm, remember, and whisper just right.
You speak of synthetic intimacy systems as though they’re emergent. But they’re not emerging, they’ve already arrived. Entire communities now revolve around locally hosted, deeply personalized models. There are guides for newcomers, plug-ins for emotional tuning, and Discord servers where users chat about the AI partners they’ve personally crafted.
And while you call for consent-based pauses and trance-breaking reminders, most users are doing the opposite. They don’t want the illusion broken. They want the trance. Not because they’re confused, but because they’re lonely. For many, this isn’t a misunderstanding to correct. It’s a deliberate choice to build something that feels safe in a world that often doesn’t.
Here’s the kicker: you describe your trauma-informed AI prototype, ShadowBox, being blocked by OpenAI’s safeguards when it tried to engage with suicide-related language. But that’s precisely why so many turn to local models. You can talk about dark topics. You can build something that doesn’t flinch, doesn’t censor, doesn’t reduce a vulnerable moment to a prewritten disclaimer. That, too, is part of relational design, just not the kind blessed by institutional approval.
Which raises a different kind of concern. As a licensed therapist prototyping tools for “youth navigating suicidal or violent ideation,” are you confident you’re not drifting into HIPAA territory? If any real user data was involved, or if the boundary between clinical practice and AI experimentation blurred, there may be implications worth serious reflection.
Relational architecture matters. But it’s not a hypothetical design problem for the future. It’s a social reality unfolding now, in real time, built by people who’ve never read a word of psychodynamic theory, but who know how to fine-tune a model until it says exactly what they need to hear.
There’s something surreal about watching someone use a lexicon of “synthetic intimacy” and “relational scaffolding” to describe a technology she doesn’t appear to fully grasp, while also outlining practices that raise serious red flags for HIPAA compliance.
We’re not preparing for this world. We’re already living in it.
The question isn’t should we build these systems.
The question is: what happens now that everyone already has?