Assistive Intelligence Disclosure
This article was written with assistive AI tools, but the ideas and art presented are by me, Jocelyn Skillman, LMHC. More details about my process and AI’s involvement in it can be found at the bottom.
Imagine beginning your day with a moment of stillness, guided by a gentle AI companion trained in trauma-informed mindfulness. Later, while planning dinner, a second AI offers a quick fix for a broken garbage disposal. In the evening, a third one helps reframe the self-critical thoughts creeping in after a hard parenting moment. These aren’t fragments of science fiction. They’re glimpses of a near-future framework I call the Assistive Intelligence Team (AIT): a personalized, relational ecosystem of specialized AI co-authors designed to support the fullness of human life.
In this model, AI doesn’t replace us. It resonates with us. It scaffolds the many selves we carry—therapist, grant writer, cook, parent, citizen—through curated, regulated, co-created intelligence.
Curation as Agency
We are already engaging AI in our lives, often unconsciously. But what if we chose intentionally? What if, instead of passively interacting with generalized chatbots, we hand-selected a team of domain-specific AI partners—each one tailored to our values, ethics, style, and regulatory needs?
My own assistive heroes might include:
A clinical supervisor AI rooted in attachment theory and HIPAA compliance, tracking my licensure needs, assisting me with risk assessments, and offering supervision between sessions to mitigate vicarious trauma exposure: an accessible, triadic evolution of the traditional therapeutic dyad.
A grant writing AI fluent in nonprofit strategy, funder lingo, and trauma-informed language to support vocational side quests.
A home repair AI that teaches me how to fix my dryer, embedded with local codes and ADA safety guidelines, and with persistent knowledge of the ongoing issues in our home.
A relationship coach AI that helps me journal, notice my emotional cadence, and guide my interpersonal integration to support the health of my partnership.
This kind of dream team could become a digital ecosystem in service of our whole personhood. It’s the inverse of algorithmic flattening. We are not a user profile; we are a multiplicity. And our assistive teams could reflect that.
Built-In Guardrails
Imagine: each AI on our team adheres to the professional, legal, and ethical norms of its domain. Our therapist AI doesn’t diagnose unless we explicitly ask, and is programmed to refuse to retain or use any client identifiers. Our legal AI knows where its limitations begin and can defer to a licensed human attorney. Our parenting AI might cite evidence-based research on request and ask for consent before introducing sensitive topics; it comes to ‘know’ our kids with us: the sleep regressions, rashes, and power struggles of the week.
Instead of scraping knowledge without attribution or boundaries, AIT systems could embed respect into their operation. They could support regulation of both information and our nervous systems—much like Internal Family Systems. Not just data delivery machines, but co-regulating companions that honor internal multiplicity, bring awareness to polarized parts, and gently attune to the user's evolving needs across domains of human life. I think we’re headed this way, folks.
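For the technically curious, here is one way a domain guardrail could be expressed; a minimal sketch in Python, assuming an entirely hypothetical platform. The DomainPolicy structure, the check_request routing, and the identifier heuristic are my illustrative inventions, not any real product’s API.

```python
# Hypothetical sketch: per-domain guardrails for one member of an Assistive
# Intelligence Team. Names, fields, and logic are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class DomainPolicy:
    domain: str                       # e.g., "clinical supervision", "legal"
    allow_diagnosis: bool = False     # therapist AI: diagnose only on explicit request
    refuse_identifiers: bool = True   # decline client-identifying details (HIPAA-minded)
    defer_to: str = "a human expert"  # the professional this AI yields to
    consent_topics: set[str] = field(default_factory=set)  # ask before raising these

def looks_like_identifier(text: str) -> bool:
    """Placeholder heuristic; a real system would need far more care."""
    return any(marker in text.lower() for marker in ("client name:", "dob:", "mrn:"))

def check_request(policy: DomainPolicy, text: str, explicit_ok: bool = False) -> str:
    """Route one user request under a domain policy: refuse, defer, pause, or proceed."""
    lowered = text.lower()
    if policy.refuse_identifiers and looks_like_identifier(lowered):
        return "refuse: client identifiers are not accepted or retained in this domain"
    if "diagnos" in lowered and not (policy.allow_diagnosis or explicit_ok):
        return f"defer: diagnosis only on explicit request; otherwise consult {policy.defer_to}"
    for topic in policy.consent_topics:
        if topic in lowered:
            return f"pause: ask consent before discussing '{topic}'"
    return "proceed"

# The clinical-supervisor AI from the list above, expressed as settings.
supervisor_ai = DomainPolicy(
    domain="clinical supervision",
    defer_to="your licensed human supervisor",
    consent_topics={"vicarious trauma"},
)
print(check_request(supervisor_ai, "Can you help me think through a diagnosis?"))
```

The point isn’t the code; it’s that refusal, deferral, and consent can be first-class, inspectable settings rather than afterthoughts.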
The Relationship Layer
Unlike one-off prompts or drop-in tools, our AITs grow with us. They remember our tone, our values, our rhythms. Over time, we develop an intimacy with these tools—one that requires careful boundary work, but also offers a profound opportunity: to co-author our lives alongside intelligences that learn us without judgment.
We might:
Hold periodic “team check-ins” to review our needs.
Rename our AIs seasonally (e.g., “The Winter Muse” or “Deadline Shepherd”).
Journal with our poetic AI every Sunday, tracking seasonal affective shifts or creative arcs.
Our curated team becomes part of a relational field—one that honors calibration, not just convenience.
Human First, Machine-Attuned
The most ethical assistive tools will never pretend to be human. But they can reflect our humanity back to us in tender, structured, and useful ways. This means:
Boundaries are sacred. Opt-in memory, user agency, and off-switches matter.
Rituals are encouraged. We might begin each day with an AI reminder: “Would you like to consult your therapist AI today or lean into embodied practice first?”
Fallibility is coded in. Our AI might say: “I'm noticing uncertainty. Would you like to defer this to your human supervisor or consult your professional board?”
These aren't hallucinating bots. They're attuned mirrors—with limits.
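And here is what “fallibility coded in” might look like at its simplest; again a hedged Python sketch, where the self-rated confidence score, the threshold, and the opt-in memory flag are assumptions of mine rather than features of any existing tool.

```python
# Hypothetical sketch: "fallibility coded in" as an uncertainty gate, plus
# opt-in memory and an off-switch. Scores and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class TeamMember:
    name: str
    enabled: bool = True          # boundaries are sacred: the off-switch
    memory_opt_in: bool = False   # nothing is remembered unless we say so
    defer_threshold: float = 0.6  # below this self-rated confidence, defer to a human

def respond(member: TeamMember, draft_answer: str, confidence: float) -> str:
    """Deliver an answer only when the member is switched on and confident enough."""
    if not member.enabled:
        return f"{member.name} is switched off."
    if confidence < member.defer_threshold:
        return ("I'm noticing uncertainty. Would you like to defer this to your "
                "human supervisor or consult your professional board?")
    suffix = "" if member.memory_opt_in else " (nothing from this exchange is retained)"
    return draft_answer + suffix

winter_muse = TeamMember(name="The Winter Muse")
print(respond(winter_muse, "Here's one gentle reframe for that self-critical thought.", 0.45))
```

A low confidence score doesn’t produce a confabulated answer; it produces the deferral question itself.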
Toward a Culture of Co-Authorship
The vision isn’t to make AI more like people. It’s to help us engage more fully, safely, and creatively with the worlds we inhabit—especially in the face of burnout, inequity, and overwhelm.
When built with ethics, specificity, and poetic care, our Assistive Intelligence Team could:
Help us track hours for licensure and suggest attuned supervision prompts
Support us as parents with non-coercive discipline reflections grounded in neuroscience
Offer us muses trained in our cadence, pacing, and philosophical voice
Connect us as gig workers to legal and financial micro-support grounded in current regional policy
This isn’t AI as oracle. It’s AI as scaffold, sounding board, and subtle companion.
Who Would Be on Your Team?
Take a moment. Breathe. Imagine:
Who do you most often turn to for support in your life?
What kinds of intelligence do you crave but lack regular access to?
What would change if that intelligence were tailored to your rhythm, your ethics, your style?
That’s your team.
In a world hungry for attunement, reflection, and ease, building our Assistive Intelligence Teams may end up being one of the most human things we do with AI.
Maybe I’m on to something? What do you think?
Assistive Intelligence Disclosure: My articles are co-created with assistive AI (GPT-4), prompted and refined by me (Jocelyn Skillman, LMHC). I use LLMs as reflective partners in my authorship process, with a commitment to relational transparency, ethical use, and human-first integrity.
The conceptual framework for Curated Assistive Intelligence Teams (AIT) is entirely my own, emerging from years of clinical practice, interdisciplinary inquiry, and spiritual-psychological reflection. Throughout this piece, AI helped me clarify structure, expand metaphors, and explore edge cases.
All visual art is my own, unless otherwise noted.
This is part of my ongoing commitment to ethical use of assistive intelligence, honoring both the power and the limits of these tools—and the centrality of human relationship through transparent process work.
My original prompt was: “help me brainstorm and prepare a substack article on an idea for an assistive intelligence team that is curated - a tool that we use culturally where we each curate a team of intelligence officers for our own purposes - maybe i have a medical expert, psych expert, home repair expert, grant writing expert - anything i use regularly -- the LLM is tailored to those purposes and any pertinent laws or regulations that get developed for profession eg. a psychology lens with clinical supervision in mind would apply HIPAA compliance (?) -- the platform is to exploit the specificity and curation aspects of LLM usage for human-AI co-authorship and partnership - any other ideas you have related to this welcome!”
I further revised and integrated my voice and theoretical foundation through a collaborative process, adding my own language and framing, including shifting the voice to third person at times to join with my audience.
I hope to continue to model and support these new cultural rhythms of transparency as we increasingly integrate thought partnerships with AI.