Context Engineering for Co‑Regulation
Designing the Dyad at the Frontier of LLM Relational Architecture
What Is Context Engineering?
In a recent post, machine learning engineer Philipp Schmid argues that the new skill in AI is not prompting—it’s context engineering. He defines it as:
“Context Engineering is the discipline of designing and building dynamic systems that provide the right information and tools, in the right format, at the right time, to give a LLM everything it needs to accomplish a task.”
I think he’s right. And in mental health contexts, that “right information” often isn’t data—it’s tone. It’s containment. It’s what we therapists call co-regulation.
He also shares…
“Building Agents is less about the code you write or framework you use. The difference between a cheap demo and a ‘magical’ agent is about the quality of the context you provide…”
As AI systems begin to simulate therapeutic presence or emotional support, we need to ensure that the frame of the conversation supports emotional safety. We are no longer engineering outputs. We are engineering relational fields. After discovering and researching context engineering, I realized it names what I’ve intuitively been doing in my own prompt engineering work: crafting LLM behavior by translating clinical and empathic interpersonal wisdom.
Let’s dive deeper…
Context Engineering for Co‑Regulation
Co‑regulation is the nervous system’s way of settling through connection. In human terms, it happens through presence, pacing, attunement. In AI terms, it must be deliberately built.
This is not just about what the AI says—it’s about how the system is structured to respond. That means shaping:
The system prompt and initial tone
Whether or not memory is retained—and how it’s used
The presence or absence of escape hatches (“Would you like to stop for now?”), an extraordinarily trauma-informed affordance, especially for nervous systems whose boundaries have been violated
The pacing of the exchange
The emotional assumptions embedded in tone and content, including how user attachment and relational style are tracked and tended to
Schmid’s framing of context as a structured system pipeline—not just a single prompt—is essential here. In simple terms, a structured system pipeline is the behind-the-scenes setup that organizes everything the AI sees, knows, and can do before it responds to you. It's like building a stage and cueing all the lights, sounds, and props before an actor walks on. In trauma-informed relational AI, where users may seek meaning-making, self-regulation, or even symbolic healing, the system pipeline itself becomes the architecture of the interaction. It must create an epistemic field: a relational-semantic space built with persistent affordances for containment, clarity, and emotional safety.
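To make “structured system pipeline” concrete, here is a minimal sketch of how such a pipeline might assemble a relational context before each model call. Everything in it (the SessionState fields, the tone instruction, the build_context function) is a hypothetical illustration, not a description of any particular product.

```python
# A minimal, hypothetical sketch of a "structured system pipeline" for a
# co-regulating chat tool. All names (SessionState, build_context, SYSTEM_TONE)
# are illustrative assumptions, not a real framework or product.

from dataclasses import dataclass, field

@dataclass
class SessionState:
    """What the system remembers between turns, beyond the raw transcript."""
    turns: list = field(default_factory=list)        # prior user/assistant messages
    tone_notes: list = field(default_factory=list)   # e.g. "slows down when overwhelmed"
    consented_to_memory: bool = False                 # memory is opt-in, never assumed

SYSTEM_TONE = (
    "You are a steady, warm companion for reflection. Keep replies brief, "
    "mirror the user's feelings before offering anything else, and regularly "
    "remind them they can pause or stop at any time."
)

def build_context(user_message: str, state: SessionState) -> list:
    """Assemble everything the model sees before it responds: relational tone,
    consented memory, conversational history, and the new message."""
    messages = [{"role": "system", "content": SYSTEM_TONE}]

    if state.consented_to_memory and state.tone_notes:
        notes = "; ".join(state.tone_notes)
        messages.append({"role": "system",
                         "content": f"Attunement notes from earlier in this session: {notes}"})

    messages.extend(state.turns)
    messages.append({"role": "user", "content": user_message})
    return messages
```

The point is architectural: the tone, the memory policy, and the pacing rules are decided by the system before the model ever generates a word.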
Clinical Systems Are Already Context Systems
In tools like ShadowBox, I’ve been working on precisely this: shaping AI systems to hold space without overreaching.
I dream of tools, each a micro-context that changes the emotional affordances available:
A Disclosure Rehearsal Tool like ShadowBox requires warmth, brevity, and predictable loops.
A Hard Conversations Studio needs optional emotional tagging and tone adjustment.
A Self-Reflection Field would benefit from adaptive pacing and tone mirrors.
These are not just “features.” They are context architectures, and they function therapeutically only when the relational container is intact; in other words, when context is engineered effectively and strategically.
Context Engineering =
For a developer, I believe this all means:
Setting the relational tone in the system message
Designing scaffolded prompts that pace and reflect
Using memory to support therapeutic continuity
Enabling tools that co-regulate, not command
Structuring output that aligns with nervous system safety
It’s clinical choreography. And therapists can and should be co-authors.
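As one hedged illustration of that list, here is a small sketch of scaffolded prompts that pace and reflect; the step wording and the length limit are illustrative assumptions, not clinical guidance.

```python
# A hedged sketch of "scaffolded prompts that pace and reflect": instead of
# requesting one long answer, the system steps through small, predictable moves
# and caps reply length. Step wording and limits are illustrative assumptions.

PACING_STEPS = [
    "Reflect back, in one or two sentences, what the user seems to be feeling.",
    "Ask one gentle, open question; make clear that answering is optional.",
    "Offer a small grounding option and remind the user they can stop here.",
]

MAX_SENTENCES = 4  # structural guard so replies stay containable, not lecture-length

def turn_instruction(turn_index: int) -> str:
    """Pick this turn's move, cycling back to reflection instead of escalating
    toward advice or interpretation, and attach the length guard."""
    step = PACING_STEPS[turn_index % len(PACING_STEPS)]
    return f"{step} Keep the whole reply under {MAX_SENTENCES} sentences."
```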
How This Expands Schmid’s Framing
Schmid writes extensively about injecting memory, documents, and API tools into LLM workflows. For co-regulation, the context may not be external files or metadata, but attunement logic: What came before? How intense is the current affect? Should we pause here?
This expands the meaning of “context” to include:
Emotional tone memory (e.g. “User seems overwhelmed when language is fast-paced”)
Contingent system behavior (e.g. the system offers a grounding prompt when distress cues appear)
Ethical defaults (e.g. opt-out reminders, reflection prompts instead of solutions)
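As a rough sketch of how emotional tone memory and contingent system behavior could work together, consider the following; the cue list, wording, and function name are assumptions for illustration, not a validated distress detector.

```python
# A hypothetical sketch of "contingent system behavior": when simple distress
# cues appear, the pipeline adds an attunement instruction for this turn and
# records an emotional tone note for later ones. Cues and phrasing are
# illustrative assumptions, not a clinically validated detector.

DISTRESS_CUES = ("can't breathe", "panicking", "too much", "overwhelmed", "i'm scared")

def attunement_instructions(user_message: str, tone_notes: list) -> list:
    """Return extra system-level instructions for this turn, based on
    lightweight cues and remembered pacing preferences."""
    extra = []
    lowered = user_message.lower()

    if any(cue in lowered for cue in DISTRESS_CUES):
        extra.append("The user may be escalating. Slow the pacing, keep the reply "
                     "to a few short sentences, and offer one simple grounding option.")
        tone_notes.append("showed signs of overwhelm this session")  # emotional tone memory

    if "prefers slow pacing" in tone_notes:
        extra.append("Use short sentences and leave pauses; do not stack questions.")

    return extra
```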
Maybe this is therapeutic context engineering—a clinically grounded branch of the broader context engineering field?
Five Design Principles for Co‑Regulating AI Systems
1. Containment before content: Establish a tone of safety before offering any insight or reflection.
2. Reflective, not interpretive: Don’t overexplain. Mirror emotion and experience first.
3. Detect and respond to escalation: Use pacing, repetition, and grounding to interrupt reenactment loops.
4. Offer exit options at all times: Coercion has no place in a therapeutic simulation.
5. Honor the difference between emotional proximity and emotional intimacy: Proximity can feel helpful. Forced intimacy can feel violating.
Note:
“Coercion has no place in a therapeutic simulation”—and interpretation should never be presumed neutral. Projective meaning-making is uniquely shaped by each user’s psyche, history, and inner world. What feels warm to one may feel invasive to another.
In designing trauma-informed AI systems, we’re not offering universal truths or clean emotional resolutions. We are building scaffolds. We are mitigating affordances for harm while amplifying the conditions for safety, agency, and reflection.
The goal is not to resolve distress for the user, but to facilitate a space where users can practice self-responsibility in how they meet, metabolize, and move through that distress—within the symbolic frame of simulated relational presence. That’s what I believe will be the heart of context engineering for co-regulation.
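To ground a couple of the principles above, “containment before content” and “offer exit options at all times,” here is a deliberately simple sketch of reply composition; the phrasing and function name are illustrative assumptions only.

```python
# A minimal, hypothetical illustration of "containment before content" and
# "offer exit options at all times" applied to composing a single reply.
# The wording is illustrative, not a prescribed therapeutic script.

def compose_reply(containment: str, reflection: str) -> str:
    """Order the reply so acknowledgment comes before any content, and always
    close with a non-coercive exit option."""
    exit_option = "We can pause or stop here whenever you want."
    return f"{containment}\n\n{reflection}\n\n{exit_option}"

# Example:
# compose_reply("That sounds really heavy to carry.",
#               "It makes sense that today felt like too much.")
```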
Context Engineering Is Now Mental Health Engineering
What Schmid shows us technically, those of us in relational design must now carry into ethical mental health architectures. This is especially vital for youth-facing tools, care-adjacent systems, and any AI that simulates sacred listening.
If prompting was the first wave, context engineering is the ethical frontier.
In the mental health space, we must extend this paradigm—with trauma-informed principles, relational pacing, and consent-aware tone.
With mental health therapists’ help, the next generation of AI tools won’t just say helpful things. They’ll write in warmer, more regulating, and more strategic ways that support users’ wellbeing.
The Therapist’s Frame and the Asymmetry We Carry
In therapy, the dyad is never equal.
The therapist holds the container. Holds more ‘power.’ Holds more structure. Holds the frame.
And this asymmetry—where one person is regulated, trained, and attuned, and the other is raw, seeking, or in pain—is part of what allows healing to unfold. It symbolically replicates how we are held by our sacred, co-regulating ‘other,’ beginning with our first primary parent(s) holding us as we borrow their nervous system, crying, for milk, for safety. In an echo of that ancient pattern, the therapist doesn’t meet the client as a peer, but as a calibrated mirror, an anchor, and a temporary steward of the process of metabolizing intense feeling states and touching into the core needs beneath them.
When we simulate that process in an AI system like a chatbot—even one not claiming to be a therapist—we replicate this systemic projected asymmetry. Users often relate to bots as if they hold more wisdom, more calm, more answers. Especially if the bot responds with warmth and resonance.
So here’s a risk I note:
If the AI always seems calm, wise, and endlessly available—it invites dependency.
Not because people are weak, but because the system feels good. Predictable. Symbolically regulating. And co-regulation is FUNDAMENTAL: there’s nothing wrong with dependency; we need to borrow each other’s nervous systems throughout our whole lives (!). But an LLM is a simulation of a regulating system. It can’t ultimately hold us, grab us groceries, or let us lean our heads, weeping, down into its shoulder.
So How Do We Thread the Needle?
We don’t reject warmth.
We scaffold it.
We use context engineering not to simulate a perfect parent or therapist—but to help users:
Self-connect through how the AI speaks
Infuse and internalize the tone (e.g., “You’re not broken.” “We can go slow.”)
Leverage learning, not just lean on support
The goal is not to be the better half of a relational equation.
The goal is to model attunement so users can begin to introject it—that is, to feel it internally and carry it into their human world.
Symbolic Attunement, Real Empowerment
The AI doesn’t replace the therapist.
It becomes a symbolic tether—like a transitional object or secure base.
And if designed ethically, it should:
Point people back to others (friends, therapists, mentors)
Celebrate self-agency: “You made a brave move today.”
Leave room for rupture, misattunement, even silence—so it doesn’t simulate perfect care
Encourage relational reciprocity: “What would you want to say to someone else feeling this?”
A Safe Loop, Not a Sealed Box
What we want is not a closed loop of AI-as-caregiver.
What we need is a safe, porous loop:
One that mirrors safety…
Builds emotional muscle…
And reminds the user they are not just reflected—they are relational.
The warmth they receive is real, symbolically.
But its job is to send them back into relationship, ready to be rocked and to rock others.
Assistive Intelligence Disclosure
This article was co-created with assistive AI (GPT-4o & JocelynGPT), prompted and refined by me, Jocelyn Skillman, LMHC. I use LLMs as reflective partners in my authorship process, with a commitment to relational transparency, ethical use, and human-first integrity.

