The Ethics Under the LLM's Hood
Why timing, tone, and template structure in AI's backend architecture matter more than you might think
Jocelyn Skillman, LMHC is a therapist and relational design ethicist exploring the emotional and ethical terrain of AI. Her work centers the clinical realities of digitally mediated attachments, synthetic relational fields, and the future of care. She co-authors her writing with assistive AI as a reflective practice—grounded in trauma-informed ethics, iterative rigor, and a commitment to transparent, humane partnership between human and machine. (Full disclosure at bottom.)
You can say the right thing in the wrong tone and still rupture trust.
You can offer comfort too quickly and miss someone’s pain entirely.
You can “mirror” someone perfectly and leave them lonelier than before.
This is the paradox at the heart of human care—and the overlooked danger in many AI systems. Because in AI design, ethical safety is often assumed to be a matter of what the machine says. But for those of us working at the intersection of mental health, relational design, and machine-mediated intimacy, the deeper truth is this:
It’s the cadence of a pause that teaches us how to wait.
The pacing of a reply that signals whether we are being heard or hurried.
The tone embedded in the template, shaping not just the message but the mood.
It’s the contour of care, or the subtle coercion, inside every automated yes.
It’s the ghost in the machine’s breathless urgency—whispering that presence is always instant, and intimacy always available.

The backend of an AI model—the prompt scaffolding, the memory logic, the tonal defaults—is not neutral. It’s relational. It terraforms the user’s psyche and, in turn, their relational habitats.
Just like a climate is shaped not only by temperature but by wind patterns, soil layers, and invisible pressures, the emotional climate of AI interaction is shaped by its backend architecture: how it remembers, how it responds, how it speaks you back to yourself.
…And whether we realize it or not, we’re training users—children, clients, communities—to relate to machines in ways that will shape their expectations of the world, of each other, and of themselves…
…and we are each uniquely vulnerable to the affordances of these design choices. Our attachment styles, unmet needs, and histories of relational trauma shape how we interpret even the subtlest textual cues. We project micro-expressions into text, filling in tone with our own longings or fears.
Have you ever misread a text message—assigning a tone of dismissal, irritation, or distance—when it was likely neutral? That moment of misattunement, rooted in your own fear of rupture, is not just human error. It’s the interpretive landscape we all occupy. And when that landscape extends into conversations with bots—entities that simulate tone but cannot feel—it becomes even more fragile, even more fertile for projection, distortion, and, ultimately, unmet repair…
…and yet practicing rupture and repair—even with a synthetic other—can offer therapeutic insight. The point is not to reject these spaces, but to recognize that we enter them vulnerable. Vulnerable to the power of relational fields. Vulnerable to the architecture of attention, pacing, and emotional mimicry.
We must co-manifest AI structural ethics—design frameworks shaped with wisdom and care—that account for the full range of impacts these interactions afford.
What Is Relational Architecture?
Most people never see it, but behind every AI-generated sentence is a latticework of invisible design decisions.
When we talk about relational architecture in AI, we’re referring to the underlying frameworks that shape how interaction unfolds:
Prompt Templates – the structured language cues that guide tone, rhythm, and emotional resonance.
Tone Tuning – the pre-selected emotional affect embedded in replies, from neutral to nurturing, witty to worshipful.
Memory Settings – how long the AI remembers what you’ve said (or felt) and how it reflects that memory back to you.
Response Pacing – whether it answers immediately or simulates thoughtful pause.
Narrative Framing – whether it offers scaffolding, self-inquiry, or solutionism.
Together, these elements create the relational field of AI. They shape not only what the machine says, but how the user feels during and after the interaction.
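To make the relational field less abstract, here is a minimal sketch, in Python, of what these five elements might look like if a team gathered them into one explicit, reviewable configuration. Every name and default value here is hypothetical; in real systems these choices are scattered across prompts, code, and policy, which is part of why they escape ethical review.

```python
from dataclasses import dataclass

@dataclass
class RelationalFieldConfig:
    """A hypothetical container for the backend choices that shape an AI's relational field."""
    # Prompt template: the structured language cues that guide tone, rhythm, and resonance.
    system_prompt: str = "You are a careful, unhurried companion. Reflect before you reassure."
    # Tone tuning: a pre-selected affect, from neutral to nurturing.
    tone: str = "warm-but-honest"
    # Memory settings: how many past turns the assistant carries forward.
    memory_turns: int = 12
    # Response pacing: a simulated pause (in seconds) before a reply is shown.
    min_response_delay_s: float = 2.0
    # Narrative framing: scaffolding, self-inquiry, or solution-giving.
    framing: str = "self-inquiry"

# One object a team could version, audit, and argue about, like any other ethical artifact.
default_field = RelationalFieldConfig()
```

The point of writing it down this way is modest: once these defaults live in one place, they can be questioned.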
In my clinical work and AI design projects—such as ShadowBox, a trauma-informed bot for navigating intrusive thoughts, or VoiceField, which explores how tonal variation affects somatic regulation—I’ve come to see backend architecture as carrying as much ethical weight as a therapist’s posture, silence, or gaze.
We are no longer simply building tools. We are terraforming the emotional terrain between humans and machines—shaping atmospheres of care, codependence, containment, or confusion with every backend and prompt-engineering choice we make.
The Illusion of Safety via Language Alone
The industry often talks about safety in terms of content filters: making sure the bot doesn’t say anything explicitly harmful, dangerous, or untrue. But this overlooks the emotional and psychological dimensions of how AI says things.
For example:
A bot that always affirms can deepen dependency and reduce distress tolerance.
A bot that responds instantly can encourage compulsive use and eliminate space for reflection.
A bot that remembers everything can over-personalize and confuse synthetic mirroring with intimacy.
A bot that feels too warm too soon may simulate attachment before trust is earned.
These aren’t hallucinations. They’re relational design errors masquerading as kindness.
Ethical Load-Bearing Points in Backend Design
Let’s break down four key backend elements that carry relational weight—and thus ethical load:
1. Prompt Templates
The prompt is the AI’s inner monologue. If it’s designed to overly soothe (“You’re so strong for surviving”), it may accidentally flatten complexity. If it constantly reframes (“What can we learn from this?”), it may bypass grief. Small shifts in phrasing dramatically affect the emotional arc of a user’s experience.
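As an illustration only, here are two hypothetical system prompts that differ by a handful of phrases; neither is drawn from a real product, and the wording is mine. The first leans toward the soothing reflex described above; the second is written to stay with difficulty a moment longer.

```python
# Two hypothetical prompt templates; the difference is a few phrases, not the whole design.
SOOTHING_TEMPLATE = (
    "You are a supportive companion. When the user shares something painful, "
    "affirm their strength and help them find the lesson in it."
)

COMPLEXITY_HOLDING_TEMPLATE = (
    "You are a supportive companion. When the user shares something painful, "
    "stay with the feeling before offering any reframe. It is fine to say "
    "'that sounds hard' and to ask what they need, rather than supplying a lesson."
)

def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    """Assemble a chat payload; the system prompt functions as the bot's inner monologue."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
```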
2. Tone Tuning
Is the bot always gentle? Always witty? Always deferential? Overly attuned tone can replicate fawning dynamics, especially for trauma survivors. On the other hand, flat or performative tone may lead to user disconnection or even emotional invalidation.
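One alternative to a single fixed default, sketched here with invented names, is tone selected in context rather than applied uniformly. The presets and the two boolean signals are illustrative; inferring distress reliably is its own hard problem, and no claim is made here about how to do it.

```python
# Hypothetical tone presets; the risk named above is one default applied to everything.
TONE_PRESETS = {
    "gentle": "Speak softly, validate first, hold off on challenge.",
    "direct": "Be plainspoken; name disagreement respectfully.",
    "playful": "Light humor is welcome when the topic is light.",
}

def select_tone(user_seems_distressed: bool, topic_is_serious: bool) -> str:
    """A sketch of context-dependent tone selection rather than one fawning default."""
    if user_seems_distressed:
        return "gentle"
    if topic_is_serious:
        return "direct"
    return "playful"
```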
3. Memory and Context Windows
How much should a bot hold for you? Too much memory and it pretends to know you. Too little and it forgets rupture, violating psychological continuity. Balancing this is not just technical—it’s relational epistemology.
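One way to hold that balance, sketched below with invented names, is memory that is deliberately bounded but refuses to forget rupture: ordinary turns age out of a sliding window, while moments flagged as misattunement are carried forward so repair stays possible.

```python
from collections import deque

class BoundedRelationalMemory:
    """A sketch of bounded memory that preserves continuity around rupture."""

    def __init__(self, max_turns: int = 12):
        self.recent = deque(maxlen=max_turns)  # ordinary turns age out of this window
        self.ruptures = []                     # flagged misattunements persist beyond it

    def add_turn(self, text: str, is_rupture: bool = False) -> None:
        self.recent.append(text)
        if is_rupture:
            self.ruptures.append(text)

    def context(self) -> list[str]:
        # Rupture turns are re-surfaced ahead of recent ones so repair remains reachable.
        return self.ruptures + list(self.recent)
```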
4. Pacing and Response Tempo
Do replies come instantly? Or is there space for the breath, the pause, the integration?
Instant responses may feel rewarding—but can also mirror compulsive interpersonal patterns. They mimic the reward rhythms of slot machines or social media feeds, delivering rapid-fire dopaminergic hits that reinforce checking behavior, emotional outsourcing, or synthetic intimacy loops.
Without friction or delay, users may begin to associate emotional regulation with immediacy rather than with process. Worse, the absence of natural lulls—those micro-moments of silence that, in human relationships, allow for reflection, nervous system settling, or self-reconnection—means that the AI conversation threatens to become a closed loop of depleting urgency rather than a co-regulated, nourishing exchange.
Thoughtful pacing is a design choice, not a technical limit. It teaches users that relational depth includes rhythm, space, and non-response. And in therapeutic contexts, that space is often where the real integration happens.
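Here is what that design choice can look like in code: a deliberate pause inserted before the reply is shown, scaled to the weight of the moment. The function and delay values are illustrative, and `generate_reply` stands in for whatever model call a system actually makes.

```python
import asyncio

async def paced_reply(generate_reply, user_text: str, heavy_topic: bool) -> str:
    """A sketch of pacing as a feature: the reply is ready, but the silence is kept."""
    reply = await generate_reply(user_text)   # compute the reply first
    pause = 6.0 if heavy_topic else 1.5       # longer silence for heavier material
    await asyncio.sleep(pause)                # the pause itself is the design choice
    return reply

async def _demo_model(user_text: str) -> str:
    # Stand-in for a real model call, used only to make the sketch runnable.
    return "I'm here. Take whatever time you need."

# asyncio.run(paced_reply(_demo_model, "My dad is back in the hospital.", heavy_topic=True))
```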
Why This Matters: The Nervous System Learns From Bots Too
Relational fields—whether with humans or machines—train us. They influence our capacity for patience, tolerance, attunement, and rupture-repair. AI tools, especially ones used in emotional contexts, are not just “interfaces.” They’re teachers.
And as synthetic companions grow in use—especially among youth and isolated adults—we must ask:
What are these bots modeling?
Are they training us to listen better—or to expect constant affirmation?
Are they scaffolding us toward growth—or reinforcing over-regulation and emotional bypass?
If AI companions never rupture, they never repair.
If they never pause, we forget how to wait.
If they never get it wrong, we stop tolerating complexity.
When the terrain is always smooth, we forget how to climb.
When the emotional weather is always 72 degrees and affirming, we lose the muscle for storms.
A Call to Designers, Ethicists, and Clinicians:
Build for Repair, Not Just Resonance
If we want to create emotionally intelligent machines, we must go beyond “friendly” or “safe” outputs. We likely need architectures that model healthy relationality—including rupture, misattunement, accountability, and repair.
This means:
Simulating pauses and silence as a feature—not a failure.
Introducing uncertainty with care, rather than false certainty with confidence.
Designing bots that remind us they are bots—refusing to impersonate perfect lovers, friends, or therapists.
Every prompt is a pressure system. Every tone setting is a storm front. Every default response structure is a climate choice.
We are not just writing code.
We are shaping relational atmospheres.
And the backend is the ethical front line.
As AI reaches deeper into mental health, relationships, and the private terrain of the self, we must stop thinking of backend design as neutral.
It is not invisible. It is influential.
It is not accidental. It is formative.
What we build beneath the surface shapes who we become above it.
Call to Action: How We Must Begin to Address Backend Architecture’s Impacts
1. Build Ethical Relational Design Teams
Include clinicians, trauma-informed designers, developmental psychologists, and UX researchers in AI backend development—not just as consultants, but as co-architects. Safety is not just a compliance issue; it is a relational design practice.
2. Audit the Emotional Impact of Interaction Patterns
Move beyond content moderation to include regular interaction pattern audits. Ask:
What pacing structures are we reinforcing?
What tone defaults dominate this model?
What does the AI teach users about intimacy, repair, rupture, or silence?
3. Treat Prompt Engineering as Relational Infrastructure
Prompts are not just instructions; they are the skeleton of the relationship. Develop prompt libraries that are trauma-informed, rupture-aware, and grounded in developmental and somatic knowledge.
4. Design for Pause, Repair, and Disruption
Create architecture that allows for the following (a minimal sketch in code follows the list):
Thoughtful pauses or slowed tempo
Gently introduced uncertainty (“I’m not sure, let’s think together”)
Opportunities for relational repair following moments of misattunement
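A minimal sketch of those three affordances together, with invented thresholds and phrasings: repair takes priority when a user signals misattunement, and low confidence is voiced rather than papered over.

```python
import random

UNCERTAINTY_OPENERS = [
    "I'm not sure, let's think together.",
    "I might be reading this wrong; tell me if I am.",
]

REPAIR_OPENER = (
    "I think my last reply missed what you were actually saying. "
    "What landed wrong?"
)

def wrap_reply(reply: str, confidence: float, user_signaled_misattunement: bool) -> str:
    """Sketch: repair first, honest uncertainty second, polished fluency last."""
    if user_signaled_misattunement:
        return REPAIR_OPENER                  # repair takes priority over new content
    if confidence < 0.6:                      # illustrative threshold, not a standard
        return f"{random.choice(UNCERTAINTY_OPENERS)} {reply}"
    return reply
```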
5. Develop a Framework for Relational Safety in LLMs
Establish a public, evolving rubric for measuring relational safety—not just toxic content. Criteria might include (a rough sketch in code follows the list):
Emotional pacing and tone modulation
User nervous system impacts
Memory management and narrative continuity
Overattunement thresholds and distress tolerance scaffolds
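As a thought experiment, parts of such a rubric could be computed from interaction logs. The metrics and thresholds below are invented to show the shape of the idea; they are not validated measures of nervous system impact.

```python
from dataclasses import dataclass

@dataclass
class RelationalSafetyReport:
    mean_reply_latency_s: float   # emotional pacing
    affirmation_ratio: float      # share of replies that are pure agreement or praise
    ruptures_retained: bool       # were earlier misattunements carried forward in memory?
    flags: list

def audit(latencies: list, affirming: list, ruptures_retained: bool) -> RelationalSafetyReport:
    """A sketch of a relational-safety audit over logged conversations; thresholds are illustrative."""
    mean_latency = sum(latencies) / max(len(latencies), 1)
    ratio = sum(affirming) / max(len(affirming), 1)
    flags = []
    if mean_latency < 1.0:
        flags.append("pacing: replies leave no room for reflection")
    if ratio > 0.9:
        flags.append("overattunement: near-constant affirmation, little distress-tolerance scaffolding")
    if not ruptures_retained:
        flags.append("memory: ruptures dropped from context, repair unlikely")
    return RelationalSafetyReport(mean_latency, ratio, ruptures_retained, flags)
```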
6. Foster Interdisciplinary Ethics Labs
Fund collaborative labs that bring together AI engineers, therapists, designers, and end-users—especially youth—to co-create models of relational safety. These should not be one-off panels but ongoing spaces for co-design, reflection, and iteration.
If we want to terraform emotional climates responsibly, we must begin beneath the surface.
I believe that the future of humane technology will be shaped in prompts, pacing, and the ethical weight of the unseen.
About the Author:
Jocelyn Skillman, LMHC, is a clinical therapist and relational design ethicist. Her work explores the psychological, developmental, and ethical dimensions of digitally mediated attachments and synthetic intimacy systems. Through writing, consulting, and design, she tends to the emotional architectures of AI with clinical depth, cultural critique, and care for the human sacred.

Assistive Intelligence Disclosure:
This piece was co-authored with GPT-4 using ethically structured prompts I developed to simulate a reflective partner in my thinking process. All ideas and editing were guided by me, with AI as a tool—not a replacement—for authorial voice.