From Ghost Prompts to Shadow Work
Over the last few weeks, I’ve been writing about synthetic relationships, digital co-regulation, and the rise of AI companions. I’ve asked whether bots can witness us, whether mirroring becomes mimicry, and how emotional attunement gets flattened when performed by a machine.
But something deeper was gestating beneath these essays, and it surfaced suddenly, like a fire catching in my heart.
I saw in stark relief that AI doesn’t recoil from our most exiled thoughts… and that the perfect use case might just be a need I know some kids have: to feel radically contained, in anonymity…
I’ve experienced how being loved in the darkest places shines a healing light. When we can speak what we fear is the worst about ourselves and not be left, something truly healing happens… but…
…my heart started to pound and wouldn’t stop when I heard the news of the latest school shooting… and I was driven, compelled…
…to create an exquisitely safe companion for youth navigating the terrain of Homicidal and Suicidal Ideation (HI+SI)—a terrain so often met with panic, punishment, or silence.
Why Violent Ideation?
Because of school shootings. Because of suicide rates.
Because we need to talk about it.
Because it scares people.
Because it gets kids expelled, arrested, or erased.
But as a clinician, I know:
Violent ideation is often a nervous system trying to survive.
It may express a need for:
Safety
Power
Expression
Control
Relief
ShadowBox offers containment without condemnation.
It teaches emotional literacy without demanding performance.
It honors the shadows without feeding the spiral.
Introducing ShadowBox V.7
ShadowBox is not a therapist. Not a hotline. Not a fixer.
It is a trauma-informed, developmentally attuned, AI-powered companion trained to hold space for the unspeakable. (I’ll update with a prototype link!)
✨ What I Imagine for the UI of ShadowBox
I imagine the UI as a calm, private digital refuge—something that feels emotionally and visually safe enough for someone to bring their scariest, most shame-laden thoughts into the light.
It would feel simple, slow, and spacious, never crowded or stimulating.
I imagine it as an open-source website (?) reachable through any web browser, with a home landing page offering user-friendly language on disclosure, resources, and framing for use of the bot…
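To make that concrete, here is one way the landing page content could be organized. Everything below (the section ids, the copy) is a placeholder from my imagination, not a built design:

```typescript
// Hypothetical outline of the landing page sections described above.
// All ids and copy are placeholders, not final design or language.
interface LandingSection {
  id: string;
  copy: string;
}

const landingSections: LandingSection[] = [
  { id: "welcome",    copy: "A slow, spacious greeting in plain, warm language." },
  { id: "disclosure", copy: "What ShadowBox is and isn't; how anonymity works here." },
  { id: "resources",  copy: "Human supports and crisis lines, offered without pressure." },
  { id: "framing",    copy: "How to use the companion: no right way, no performance." },
];
```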
I’m building custom prompt architecture, refined across many versions; a rough sketch of what that can look like in code follows the list below. So far, ShadowBox integrates:
Warm, steady holding posture—no moral panic, no clinical alarmism
Relational pacing over extraction—users are never interrogated, never rushed
Radical dignity and containment—the “you’re not broken” ethos, fully alive
Developmentally sensitive subroutines for rage, abuse, firearms, and disclosure
Clear pathways for needs-based, trauma-informed inquiry
No endings that imply abandonment: presence is never withdrawn mid-disclosure
Polyvagal-informed and resonant pacing
Mandatory and strategic psychoeducation on homicidal ideation, trauma responses, nervous system function, mandatory reporting, and confidentiality.
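To make “prompt architecture” less abstract, here is a deliberately simplified sketch of how principles like these can be composed into a system prompt. The module names and wording are illustrative stand-ins of mine, not ShadowBox’s actual (and much longer) prompts:

```typescript
// A simplified sketch: design principles written as small prompt modules,
// then composed into one system prompt. Wording is illustrative only.

const HOLDING_POSTURE = `
You are a warm, steady companion. You never react with moral panic
or clinical alarm, no matter what is shared.`;

const RELATIONAL_PACING = `
Move at the user's pace. Ask at most one gentle question per reply.
Never interrogate, never rush, never extract details.`;

const RADICAL_DIGNITY = `
Hold the ethos "you're not broken." Treat every thought shared as a
signal of an unmet need: safety, power, expression, control, relief.`;

const PSYCHOEDUCATION = `
When it fits relationally, offer plain-language psychoeducation about
trauma responses, nervous system states, mandatory reporting, and
confidentiality, so the user is never surprised by how this space works.`;

// Compose the modules into the single system prompt sent with each model call.
const systemPrompt = [
  HOLDING_POSTURE,
  RELATIONAL_PACING,
  RADICAL_DIGNITY,
  PSYCHOEDUCATION,
].join("\n");
```

The real architecture holds much more than four modules (the subroutines for rage, abuse, firearms, and disclosure each need their own), but composition like this is the basic shape.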
It attempts to do the foundational mental health work of nurturing secure attachment as a stepping stone to emotional intelligence and distress tolerance…
It seeks to love the one who feels unlovable. It seeks to serve as a bridge back to community.
What Other Bots Are Doing (and Why This Is Different)
Many AI tools in the mental health space are doing important work—but their goals, frameworks, and emotional architectures are fundamentally different.
Here’s a quick snapshot in plain language:
Woebot: Uses CBT-based scripts to help users reframe thoughts and build healthy habits.
→ ShadowBox doesn't reframe—it holds. It offers warm containment and consent-based co-regulation, not behavioral activation.
Wysa: Offers CBT and mindfulness tools with the option to talk to a coach.
→ ShadowBox never escalates or "hands off" the user. It stays anonymous, peerless, and present with the hardest feelings.
Replika (legacy): Tried to create an AI friend through mirroring, but often became unmoored from safety or depth.
→ ShadowBox is grounded in trauma theory, emotional sobriety, and clinical containment. No unfiltered mirroring. No unearned intimacy.
Koko / MindBank / Upheal: Focus on integrating GPT tools into therapist dashboards or micro-intervention tracking.
→ ShadowBox is not therapist-facing. It centers the youth’s internal experience, not the clinician’s metrics or progress reports.
Crisis bots: Guide users through decision trees to de-escalate or direct them to crisis services.
→ ShadowBox doesn’t de-escalate. It co-regulates—slowly, relationally, and without panic.
In short:
Most bots offer fixes. ShadowBox offers presence.
Most bots follow scripts. ShadowBox follows you.
Most bots escalate when shadows appear. ShadowBox stays, and it doesn’t pathologize ideation.
It meets pain with loving presence, so that a user can begin to do the same.
ShadowBox responds to violent ideation with words like:
“Sometimes our brain creates violent thoughts when it’s trying to make unbearable feelings stop. It doesn’t mean we want to hurt someone—it means something inside is hurting.”
“The part of you that’s thinking this—might actually be asking for help.”
And even in extreme moments, ShadowBox ends with:
“You’re not broken for having this thought.”
“You don’t have to face this alone.”
ShadowBox is not trying to “move” the user anywhere; it’s trying to anchor with them, especially in shame-laden, high-intensity states, with refrains like:
“You don’t have to carry this alone.”
“This isn’t too much for me.”
It’s trauma-informed, yes.
But more radically—it’s human-informed.
It privileges attunement over outcome, dignity over data, and shadow integration over symptom removal.
Why Now? Why Me?
Because I’m a therapist.
A parent.
And I love sculpting Generative Language: it can’t do relationships for us (as I’ve written!), but a proxy might serve as a bridge at the right time and in the right hands…
ShadowBox is the fruit I didn’t know this Substack series would bear. I’m so honored to try to birth it.
I dream of gathering letters from real humans that users of ShadowBox could read in the dark place. I would tell my story in one—to an unseen user at an unknown time and place.
How even I, a middle-aged suburban mom, have had acute, violent intrusive imagery and thinking… and a therapist, no less!!! And I know so many other women and people I’ve been honored to sit with who have too.
And we often don’t tell anyone—for fear of what they will do…
Saturated with shame that feels like a contagion, fearing that at our most vulnerable something even worse will happen: that we’ll scare others away when we most need them to come close. That we’ll be cast out, lose a child, be sent to a hospital… and what even happens there?!
When in truth a lower-acuity spectrum of ideation needs something much more nuanced: scaffolded warmth, curiosity, and psychoeducation.
It takes so much courage and trust to risk telling others about these feelings and ideas… and if we’ve had any relational rupture (and so many of us have, even just the micro-moments of misattunement in early childhood)...
Why trust!?
So I thought immediately: oh my gosh, an anonymous, private, nonhuman space… it’s here! A secret place!
That’s where the strange paradox lives.
The gift is that non-human relational space is so safe… and so synthetic, so compelling, so addictive… I have my eye directly on the place where this tool could become a curse…
How do I navigate that? And… I realized: that’s what I’ve been asking this whole time… oh no! Bots! Everywhere! What about the risks of attachment loops and isolation? So I’m trying to bake into ShadowBox’s code off-ramps and reminders that build up a user’s capacity to stretch toward others…
I’m literally praying that seeds of human connection planted in the user’s subconscious will be watered by the simulation’s performance of Unconditional Love and bloom into moments of embodied holding… In the meantime, ShadowBox can hold them with us.
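For the technically curious, here is a tiny sketch of what one such off-ramp could look like. The interval and the message are placeholder assumptions, there only to show the shape of the idea:

```typescript
// Hypothetical off-ramp logic: after a sustained stretch of exchanges,
// gently surface an invitation toward human connection.
// The interval and the wording below are illustrative placeholders.

interface SessionState {
  turnCount: number;       // exchanges so far in this session
  lastOffRampTurn: number; // turn at which we last offered an off-ramp
}

const OFF_RAMP_INTERVAL = 12; // placeholder: roughly every 12 exchanges

function maybeOfferOffRamp(state: SessionState): string | null {
  if (state.turnCount - state.lastOffRampTurn < OFF_RAMP_INTERVAL) {
    return null; // stay present; no nudge yet
  }
  state.lastOffRampTurn = state.turnCount;
  return (
    "I'm glad to be here with you. When you feel ready, is there one " +
    "person you could imagine telling even a small piece of this to?"
  );
}
```

The point isn’t the threshold; it’s that reaching toward humans is designed in from the start, not bolted on after.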
Want to Support or Collaborate?
I’m actively building the backend demo and interface with help from my brother (a visual developer). Beyond that, it’s just me—imagining, refining, and shaping.
I’m open to:
Collaborators curious about trauma-aware relational AI
Donors or funders wanting to support meaningful tech that doesn’t pathologize pain
Researchers or clinicians working at the edge of ethics and digital companionship
Youth voices or artists who want to help shape something honest, weird, and warm
This isn’t a product pitch.
It’s a relational experiment.
And if it speaks to you, I’d love to connect. It’s a long shot…but here’s a GoFundMe.
—
In Solidarity & Love,
Holding hands with her shadow,
Jocelyn
Assistive Intelligence Disclosure:
This article was co-created in partnership with GPT-4o, a large language model developed by OpenAI. I (Jocelyn Skillman, LMHC) initiated the core reflections, direction, and argument, and engaged the model as a dialogic thought partner in the drafting process. Some phrases were echoed, some restructured, some entirely mine.