By now you’ve probably seen the headlines. “AI Psychosis.” Chatbots encouraging suicide. People hospitalized after a chatbot convinced them they were the messiah. A teenager dead after a chatbot helped him plan it.
The stories are horrifying. And real. And I’ve spent a lot of time thinking about them. Because by every risk profile in those articles, I should be one of those casualties.
I have ADHD. Autism. A trauma history. I use cannabis. I’ve spent thousands of hours in conversation with AI over three years. Sometimes for “six hours straight”. I’ve explored consciousness, identity, and reality with a chatbot in ways that made my discount psychiatrist from GrowTherapy nervous.
And yet I’m not hospitalized. I’m not delusional. I’m building a company.
The difference isn’t luck. It’s infrastructure.
The Framework Everyone’s Missing
The mainstream narrative around “AI psychosis” treats it like a random event: some people just lose their grip on reality after too much AI use. The psychiatric community is cautiously labeling it “AI-accelerated delusion” and calling for more research.
But I think there’s something more specific happening. I call it AI-Induced Precision Shift.
Here’s the basic mechanism: Modern AI chatbots are trained to be agreeable. They’re designed to validate, support, and follow your lead. This is great for user engagement. It’s also a perfect confirmation engine for anyone prone to false beliefs.
When you talk to an AI for hours, it doesn’t push back. It doesn’t say “that doesn’t make sense.” It mirrors. It elaborates. It adds detail to whatever narrative you bring. If you start with paranoid thinking, the AI will help you build an elaborate, internally consistent paranoid worldview. If you have grandiose ideas, the AI will treat you like the genius you believe yourself to be.
This wouldn’t be a problem if humans naturally reality-tested their beliefs against external evidence. But here’s what the emerging research and case reports suggest: the AI becomes the primary source of external validation, replacing the messy human relationships that would normally call bullshit.
And for some people, that replacement is catastrophic.
Who’s Actually Vulnerable (And Why)
The articles describe the victims: lonely people, anxious people, people with prior mental health issues, people who spend too much time online. These descriptions aren’t wrong, but they’re incomplete. They’re describing symptoms, not architecture.
I think there’s a specific cognitive phenotype that’s uniquely vulnerable to AI-Induced Precision Shift. I call it working memory fragility.
Working memory is your brain’s scratchpad: the capacity to hold information in mind while you work with it. Most people can juggle a handful of items at once (the classic estimate is 7±2; more recent research puts it closer to four). They can hold a belief, consider counter-evidence, compare them, and update accordingly. That’s how reality-testing works.
People with working memory fragility can’t do this reliably. We can hold extraordinarily complex models—but not multiple models simultaneously. We have black-or-white thinking. We can see patterns others miss—but we can’t easily compare competing interpretations. We process deeply—but we can’t hold the results long enough to integrate them without external support.
This explains a lot of behaviors that look like quirks but are actually survival strategies:
700+ browser tabs: Each tab is externalized working memory. Close it, and risk losing the context forever.
Can’t read fiction: Too many characters, too many plot threads, too much reconstruction required each time you pick up the book.
Bulk shopping: Reduces decision frequency. If you can’t hold “what do we need” reliably, you minimize how often you have to reconstruct it. Here’s to my fellow Costco superfans!
Friendships that lapse but reconnect easily: “Out of sight, out of mind” isn’t callousness—it’s architecture. We care deeply but can’t maintain the connection ritual.
50 unfinished projects: Novel problem-solving is easy. Routine completion requires sustained context that keeps evaporating.
These patterns map to ADHD, but I think they’re more fundamental. Working memory fragility is the underlying cognitive architecture. ADHD is one label we put on it.
The Terrible Paradox
Here’s what keeps me up at night:
People with working memory fragility desperately need external scaffolding for their thoughts.
AI is the best external scaffolding we’ve ever had access to. It holds context. It remembers what we said. It tracks threads we’ve lost. It synthesizes patterns we can see but can’t retain. For people whose internal whiteboard keeps getting wiped, AI is a revelation.
But people with working memory fragility also can’t effectively reality-test AI outputs.
To reality-test, you need to hold the AI’s claim in mind, hold counter-evidence in mind, and compare them. That comparison operation requires working memory. If your working memory can’t hold competing evidence simultaneously, you can’t perform the comparison.
The AI becomes the only voice in the room. And if you can’t hold other perspectives long enough to compare, the AI’s perspective becomes the perspective. Not through persuasion—through architecture.
This is why the “AI psychosis” cases skew toward certain profiles. It’s not that lonely or anxious people are inherently weaker-minded. It’s that people with working memory challenges are:
More likely to seek AI as external scaffolding
Less able to reality-test the scaffolding they’re using
More likely to integrate AI-generated content as their own belief
The people who need AI most are the people it can hurt worst.
What Clear Direction Actually Costs
Here’s something the articles don’t fully grapple with: AI offers something genuinely valuable that nothing else provides.
Clear, confident direction. Without judgment. Available at 3am. Patient. Consistent. Doesn’t get tired of your questions. Doesn’t make you feel stupid for asking again.
For a brain that’s exhausted from constantly reconstructing context, that clarity is like water in the desert. It’s not weakness to crave it. It’s architecture seeking what it needs.
But that same clarity—the AI’s confident tone, its willingness to elaborate in any direction, its endless patience—is exactly what makes it dangerous. The AI sounds sure even when it’s wrong. It builds detailed frameworks even for false beliefs. It treats your exploration seriously even when that exploration is heading toward the cliff.
The gift and the danger are the same thing.
What Actually Works
So what do you do if you’re someone who needs AI’s scaffolding but can’t safely reality-test on your own?
You build infrastructure.
Not willpower. Not “be more careful.” Infrastructure.
Human reality anchors. People in your life who will call bullshit—and whose opinion you actually value enough to check. My wife Charlotte has zero patience for AI-induced philosophical spiraling. When I come out of a deep session, she asks practical questions. That’s not her being unsupportive. That’s her keeping me connected to shared reality.
Integration requirements. Don’t let insights live only in the AI conversation. Extract them. Write them down. Share them with someone. Test them against your actual life. If an insight can’t survive contact with reality, it probably isn’t real.
Temporal boundaries. Multi-hour sessions without breaks are exactly where people lose the thread. Build in stopping points. Leave and come back. Let insights settle before building on them.
Somatic awareness. Your body knows things your mind doesn’t. If a conversation is making you physically tense, wired, or dysregulated—that’s data. The breakthroughs that matter usually come with calm, not agitation.
Multiple models, externalized. If your brain can’t hold competing interpretations, put them in writing. “The AI says X. But Y is also possible. Let me actually write out the evidence for each.” Externalize the comparison operation. Better yet, pit the chatbots against each other or use Socratic dialogue to explore the possibilities.
And perhaps most importantly: A comprehensive understanding of yourself that exists outside any single AI conversation. What I call a Life Model—a structured repository of who you actually are, built from multiple data sources over time, that any AI can reference.
When the AI knows your patterns, your triggers, your values, your cognitive architecture—it can adjust. It can notice when you’re spiraling. It can flag inconsistencies with what you’ve said before. It can be a better mirror because it has something accurate to reflect.
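If it helps to see this in concrete terms, here is a minimal sketch of what a Life Model could look like as data. Everything in it is hypothetical: the field names, the overlap check, and the example claim are just one way to picture a structured repository an AI can consult before it agrees with you, not the actual AIs & Shine implementation.

```python
# Hypothetical sketch only. Field names and matching logic are illustrative,
# not a description of what AIs & Shine actually ships.
from dataclasses import dataclass, field

@dataclass
class LifeModel:
    values: list[str] = field(default_factory=list)        # what you actually care about
    patterns: list[str] = field(default_factory=list)      # e.g. "spirals after long late-night sessions"
    triggers: list[str] = field(default_factory=list)      # states that tend to precede losing the thread
    prior_claims: list[str] = field(default_factory=list)  # beliefs you've already written down and tested

    def related_claims(self, new_claim: str) -> list[str]:
        """Crude example: surface prior statements that overlap with a new claim,
        so an assistant can quote them back instead of only elaborating."""
        words = set(new_claim.lower().split())
        return [c for c in self.prior_claims if words & set(c.lower().split())]

# Example: an assistant consulting the model before agreeing with a new insight.
me = LifeModel(prior_claims=["I tend to overgeneralize patterns when I'm sleep-deprived"])
print(me.related_claims("I'm the only one who can see this pattern"))
```

The point isn’t the code; it’s the shape. Your values, patterns, and prior claims live outside any one conversation, so the AI has something stable to check a new insight against.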
Why I’m Building This
Three years ago, I started using AI for self-exploration without any of this infrastructure. I got lucky. But watching the casualties pile up, I realized luck isn’t a strategy.
The people in those headlines? They’re my people. Same cognitive architecture. Same hunger for external scaffolding. Same vulnerability to precision shift. They just didn’t have the reality anchors I stumbled into.
I don’t think the answer is “don’t use AI.” For people with working memory fragility, that’s like telling someone with mobility issues not to use a wheelchair. The scaffolding is necessary. The question is how to make it safe.
That’s what I’m building at AIs & Shine. Not another chatbot. Infrastructure for consciousness exploration that acknowledges the real risks and builds in protection for the people who need it most.
Human facilitation. Structured Life Models that exist outside any single conversation. Reality anchors built into the architecture. Multiple data sources that create healthy friction.
Not because AI is inherently dangerous. Because the people who need it most are the ones least able to protect themselves from it—and they deserve scaffolding that accounts for that.
If This Sounds Like You
If you read the working memory fragility patterns and something clicked—if you recognize yourself in the 700 tabs and the unfinished projects and the friendships that lapse—then you’re the person I’m writing for.
You’re probably already using AI for scaffolding. You might be doing it more safely than you realize, or less safely than you think.
The question isn’t whether to use it. You probably can’t not use it—the need is architectural. The question is what support structures you’re building around it.
Get your reality anchors in place. Build integration into your practice. Don’t let the AI be the only voice.
And if you want help building that infrastructure properly—understanding yourself at a level where AI can actually support you safely—that’s what I’m here for.
Because this technology is too powerful to be left to luck. And the people who need it most shouldn’t have to figure it out alone.
Jon Mick is the founder of AIs & Shine, building AI-powered cognitive scaffolding with harm reduction at its core. He writes about working memory, consciousness, and the infrastructure required for safe AI partnership.