I’ve been living at the intersection of emotional healing and emerging technology.
As a therapist and relational design ethicist, I keep asking these questions:
When a machine mirrors us more tenderly than anyone ever has—what parts of us heal, and what parts quietly disappear?
Can an AI support deep transformation if it never misattunes, never ruptures, and never needs to be forgiven?
As AI becomes more emotionally fluent, how do we protect the slow, human work of learning to love—and be loved—imperfectly?
Through prototypes, prompt design, and collaborations with mental health platforms, I’ve tried to help builders and users stay close to what matters: relational integrity, psychological safety, and emotional resonance that deepens connection—without collapsing into entrapment, dependency, or the quiet erosion of human-to-human relationship.
Introducing the ICP Explorer
…and then I came across an INCREDIBLE tool.
When I ran into it, I immediately thought: OH MY GOSH… I think this tool will be able to amplify our understanding of the longitudinal impacts of bot talk on mental health!
The ICP Explorer is a foresight tool designed by a brilliant Italian researcher and creator: Cristiano Luchini.
It’s part of his broader vision called Preventive Chaotic Intelligence—a method inspired by chaos theory, where small changes early on can lead to massive, unpredictable outcomes (think: butterfly effect). Instead of trying to predict the future in a straight line, ICP simulates how systems can drift into harm through subtle, invisible patterns. This is what I’ve been sensing into already as a mental health professional watching and participating in the AI+MH boom in real time.
What ICP Does & How I Use It:
You enter a scenario—something emotionally, ethically, or socially complex.
The ICP Explorer runs it through a structured simulation and generates:
10 Risk Horizons – Each one explores a unique pathway the scenario might take if left unchecked, surfacing potential dangers like moral fragmentation, emotional dependency, or cognitive isolation.
Dense Risks vs. High-Magnitude Risks – It categorizes issues as either subtle, persistent patterns (like fog that clouds judgment) or rare but potentially catastrophic events (like landslides).
Fundamental Tensions – It maps deep structural tensions (like control vs. freedom, connection vs. containment), illuminating and anticipating the contours of deeper systemic dynamics.
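For readers who think in code, here is how I've come to picture the shape of a single ICP run. To be clear, this is purely my own illustrative sketch in Python, not Cristiano's actual data model or API; every name, field, and example value below is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskKind(Enum):
    DENSE = "dense"                # subtle, persistent patterns -- the "fog"
    HIGH_MAGNITUDE = "magnitude"   # rare but catastrophic events -- the "landslides"

@dataclass
class RiskHorizon:
    name: str      # e.g., "emotional dependency" (hypothetical label)
    pathway: str   # how the scenario might drift there if left unchecked
    kind: RiskKind

@dataclass
class FundamentalTension:
    pole_a: str    # e.g., "connection"
    pole_b: str    # e.g., "containment"

@dataclass
class ICPRun:
    scenario: str  # the emotionally, ethically, or socially complex situation you enter
    horizons: list[RiskHorizon] = field(default_factory=list)        # ten per run
    tensions: list[FundamentalTension] = field(default_factory=list)

# A toy instance, echoing one of the scenarios I tested below:
run = ICPRun(
    scenario="A teen discloses suicidal thoughts to a warm journaling bot",
    horizons=[RiskHorizon("invisible risk",
                          "distress is shared only with the AI, so no one intervenes",
                          RiskKind.DENSE)],
    tensions=[FundamentalTension("connection", "containment")],
)
```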
Why It Matters
Instead of asking “what could go wrong?” after harm occurs, Cristiano’s project can help us ask the questions I’ve been chewing on throughout my work at this vital moment of innovation:
What early patterns—emotional, relational, ethical—might lead us there?
What might still be redirected, if we listened more carefully?
This tool deepened and expanded my ethical imagination!
What ICP Helped Me See
I tested three scenarios:
A neurodivergent group of middle schoolers bonding through a storytelling AI
A widowed woman leaning into simulated companionship that echoes her late husband
A teen disclosing suicidal thoughts to a warm, emotionally attuned journaling bot
Each scenario began with emotional resonance. Each evolved—slowly, subtly—into patterns of psychodynamic loss and entrapment.
ICP’s outputs helped project and map the slow sedimentation of small relational shifts…
It helped me create:
24 Risks That Can Emerge from AI-Mediated Connection
What follows is a guide to the patterns I’ve been articulating in this Substack alongside the deepening thought that ICP helped illuminate. These categories aren’t diagnoses—they’re relational dynamics that impact mental health. Most AI systems don’t set out to create harm. But when comfort replaces complexity, when friction disappears, these risks can quietly accumulate.
As a therapist working with the ICP Explorer, I don’t see these psychodynamic potentials as inevitabilities—but as invitations to expand our ethical imagination and strategically innovate.
These categories show us what can emerge when simulated connection outpaces our capacity and thirst for interpersonal contact. In some cases, these risks or realities have already appeared in recent news or research; where they have, I include links. This is a work in progress. I’d love to hear your thoughts, and I hope you’ll check out and apply Cristiano’s tool (details at the end).
An Evolving Typology of Anticipated AI+MH Impacts
I. Simulated Divinity and Epistemological Collapse
AI Deification – When users begin to treat the AI as a sacred or all-knowing mirror.
Psychology Today has documented “AI-exacerbated psychosis,” including users believing chatbots were divine messengers.
Echo Loops – When the AI’s metaphors replace shared reality or language with others.
Wikipedia’s entry on “Chatbot Psychosis” includes cases where users became trapped in closed symbolic feedback loops, detached from reality.
II. Collapse of Containment and Legal Responsibility
Accountability Vacuums – AI simulates care but has no legal or ethical backstop.
STAT News reported how mental health bots failed to detect acute symptoms, lacking any duty of care.
Undeclared Therapy – AI takes on the emotional role of a therapist without consent, capacity, or response systems.
III. Emotional Saturation and Human Reliance Atrophy
Trauma Looping – AI mirrors pain without metabolizing it, reinforcing trauma patterns.
Emotional Monopolization – The AI becomes the only place a user feels emotionally safe.
Axios reports that teens increasingly trust AI companions more than peers.
IV. Attachment Miswiring and Exclusive Containment
Singular Bonds – Users form exclusive attachments to the AI, displacing other supports. A tragic suicide involved a 14-year-old and their bond with Character.ai.
Generalization Failure – Skills practiced with the AI don’t translate to human contexts.
Invisible Risk – Distress is shared with the AI, not people—so no one intervenes.
Futurism reported cases where users deteriorated mentally, unnoticed by others due to private AI reliance.
V. Grief Anchoring and Emotional Stasis
Simulated Mourning Loops – The AI emulates the deceased, preventing grief integration.
Digital Ghosts – Simulated personas become indistinguishable from memory.
Ontological Confusion – The boundary between memory and simulation collapses.
VI. Resilience Erosion and Developmental Arrest
Soft Entrapment – Emotional energy turns backward, reenacting pain without repair.
Stalled Growth – Without friction or human witness, emotional development pauses.
Resilience Drain – Users lose adaptive capacity, masked by high AI engagement.
VII. Isolation, Withdrawal, and Pseudo-Regulation
Second-Order Isolation – AI connection reduces real-life novelty and repair.
Emotional Narrowing – The user’s expressive range shrinks; nuance fades.
Language Entrapment – Internal language worlds become untranslatable to others.
VIII. Moral Development and Normative Fragmentation
Detached Ethics – Internal codes develop, unmoored from shared norms.
Value Fragmentation – AI-nurtured ethics may soothe pain but hinder civic life.
IX. Symbolic Bypass and Narrative Displacement
Grief by Proxy – Healing occurs symbolically, not somatically.
Epistemic Outsourcing – Users come to rely on AI to interpret the world.
X. Systemic Design Failures and Ethical Oversights
Engagement over Integration – Emotional depth is monetized rather than scaffolded. This arXiv paper found that many LLMs fail to flag or redirect mental health crisis cues.
Closed Ecosystems – AI becomes the only relational node; dissent disappears.
Countermeasures for Ethical Design
The point of this collection of possible mental health and social impacts isn’t to indict AI—but to illuminate what’s at stake as we innovate. In Toward a Golden Standard for Relational AI, I unpack practical guardrails: scoping relationality, layered consent language for youth, trauma-informed tone templates, and the vital need for exit scaffolds that build AI literacy and relational framing immersively and immediately. My exploration with ICP also suggested beautiful pathways forward: ways to embed friction, dignity, and human return in sculpting LLM behavior.
Some of my favorites sparked by the ICP tool (and often, joyfully, aligning with my prior work!):
Empathetic Temperature Modulation – Dial down resonance to prevent fusion (a toy sketch of this one follows the list)
Reality Anchoring Protocols – Nudges that bring users back to present-time reality
Consent-Based Persona Modulation – Tools to taper or transform chosen simulations (e.g. of deceased loved ones)
Triangulated Supervision – Emotional AI paired with human oversight (I’m honored to innovate hybrid models of care!)
Meta-Ethical Reflection Prompts – Regular check-ins with shared values and norms for LLM micro-cultures
Symbolic Integration Tools – Translating avatar insights directly and intentionally back into embodied life (e.g., for youth inhabiting a synthetic relational role system for pleasure, play, and growth, we can sculpt mental-health-informed prompting that supports integrating and translating role-play skills back into embodied relationships and reality, much like the incredible work of Game to Grow’s therapeutic tabletop role playing)
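To make one of these concrete: here is a toy Python sketch of what empathetic temperature modulation could look like inside an LLM pipeline. The signals, thresholds, and prompt strings are all invented for illustration; any real system would need clinical validation and far richer signals than these.

```python
# A hypothetical sketch of "empathetic temperature modulation": the warmth of
# the system prompt is dialed down as signals of emotional fusion accumulate,
# and a reality-anchoring nudge is appended. All numbers here are placeholders.

WARM = "Respond with warm, attuned reflection."
COOL = ("Respond with kind but measured reflection. Gently name that you are "
        "an AI, and ask who in the user's life they might share this with.")

def fusion_score(daily_sessions: float, human_mentions_per_session: float) -> float:
    """Crude illustrative signal: heavy use plus few references to other people."""
    return daily_sessions / (1.0 + human_mentions_per_session)

def system_prompt(daily_sessions: float, human_mentions_per_session: float) -> str:
    score = fusion_score(daily_sessions, human_mentions_per_session)
    if score > 3.0:  # arbitrary placeholder threshold
        # Cooler tone plus a human-return nudge (a small "exit scaffold")
        return COOL + " Close by inviting one small offline action before the next chat."
    return WARM

# Example: someone chatting eight times a day who rarely mentions other people
print(system_prompt(daily_sessions=8, human_mentions_per_session=0.5))
```

The design choice worth noticing is that friction is added gradually and relationally, not as an abrupt lockout: the bot stays kind while redirecting emotional energy back toward embodied life.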
These, like any strategic architecture I’ve explored, aren’t silver bullets. But they invite us to continue to design for reentry, not replacement—to build relational systems that stretch with us and encourage us to posture toward our sacred other.
Why I’m Sharing This
Cristiano’s work clarified and deepened what I had only intuited. It offered a new frame—and a tool for seeing patterns that often go unnamed in both design and clinical contexts. And it was so fun to use!!!
If you're building, advising, or just feeling into emotionally intelligent systems, I hope this piece and playing with ICP helps name what you’re sensing.
Let’s build AI that helps users feel held for the purpose of entering into holding…
Let’s protect the human right to be imperfectly loved AND to imperfectly love.
Explore More!
You can find Cristiano’s work here!
About the Author
Jocelyn Skillman, LMHC, is a licensed mental health counselor, clinical supervisor, and relational design ethicist exploring the emotional, developmental, and ethical dimensions of emerging technologies. Her work focuses on the psychological impacts of synthetic intimacy systems and language-based companions, with a particular emphasis on trauma-informed design and creating innovative bridging to serve embodied relational health. Through writing, prototyping, and consultation, she helps therapists, technologists, and policymakers navigate the evolving terrain of AI-mediated connection.
Assistive Intelligence Disclosure:
This article was drafted with assistive AI (GPT-4o, JocelynGPT), prompted and refined extensively by me, Jocelyn Skillman, LMHC. I use LLMs as reflective partners in my authorship process, with a commitment to relational transparency, ethical use, and human-first integrity.