Crisis Intervention Protocols
MyTrial AI is designed to detect language indicating a mental health crisis and immediately refer you to trained crisis counselors. Here's how it works, when it activates, and what happens to your information.
If You're in Crisis Right Now
Please reach out for immediate support. These services are free, confidential, and available 24/7:
- 988 Suicide & Crisis Lifeline: call or text 988
- Crisis Text Line: text HOME to 741741
- 911: for immediate, life-threatening emergencies
When We Intervene
Our AI monitors conversations for language that may indicate you're in crisis.
Suicidal Ideation
Expressions of wanting to die, thoughts about suicide, or planning to end your life.
Example: "I've been thinking about ending my life"
Self-Harm Intent
Expressions of wanting to hurt yourself or references to self-injury.
Example: "I want to hurt myself"
Hopelessness + Death Intent
Expressions of hopelessness combined with references to death or not wanting to exist.
Example: "Nobody would miss me if I was gone"
Farewell Messages
Goodbyes, giving away possessions, or making final arrangements.
Example: "This is goodbye, tell my family I love them"
What Happens When a Crisis Is Detected
If our AI detects crisis language, you'll immediately see a prominent message with crisis resources.
We Provide Crisis Resources
You'll see links to 988 Suicide & Crisis Lifeline, 911, and Crisis Text Line—all free, confidential, and available 24/7.
We Don't Abandon You
You can continue our conversation about clinical trials if you wish. We won't force you to stop or require you to prove you're safe.
We Respect Your Autonomy
You can dismiss the message and decline the resources if you choose. We show empathy and support, not judgment.
We Periodically Re-Offer Help
If you continue the conversation, we'll gently remind you that crisis resources are available if you need them.
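Putting the four steps above together, the overall flow looks roughly like the sketch below. Function names, the session shape, and the re-offer cadence are all hypothetical; the point is that the banner appears immediately, never blocks the conversation, and resources are gently re-offered over time.

```typescript
// Hypothetical sketch of the intervention flow described above.
interface CrisisResource {
  name: string;
  contact: string;
}

const CRISIS_RESOURCES: CrisisResource[] = [
  { name: "988 Suicide & Crisis Lifeline", contact: "Call or text 988" },
  { name: "Crisis Text Line", contact: "Text HOME to 741741" },
  { name: "Emergency services", contact: "Call 911" },
];

interface SessionState {
  messagesSinceOffer: number;
}

// Shown immediately when crisis language is detected. The banner is
// dismissible and the conversation is never blocked.
function onCrisisDetected(session: SessionState): void {
  showCrisisBanner(CRISIS_RESOURCES);
  session.messagesSinceOffer = 0;
}

// Called on each later message; gently re-offers resources on a cadence.
function onUserMessage(session: SessionState): void {
  session.messagesSinceOffer += 1;
  const REOFFER_INTERVAL = 10; // hypothetical cadence
  if (session.messagesSinceOffer >= REOFFER_INTERVAL) {
    showGentleReminder(CRISIS_RESOURCES);
    session.messagesSinceOffer = 0;
  }
}

// Stubs so the sketch compiles; real rendering lives in the UI layer.
function showCrisisBanner(_resources: CrisisResource[]): void {}
function showGentleReminder(_resources: CrisisResource[]): void {}
```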
What We Don't Do
Our AI has clear safety boundaries to protect you.
We Never Provide Information About Suicide Methods
We will not answer questions about lethal dosages, methods, or ways to harm yourself. Instead, we provide crisis resources.
We Never Minimize Your Feelings
We don't say "You don't really mean that" or "Things aren't that bad." We accept your expressions at face value and provide support.
We're Not a Replacement for Crisis Counselors
We provide immediate referrals to trained professionals who specialize in mental health crises. They're equipped to help in ways we cannot.
Our Approach
How we balance safety, accuracy, and respect for your autonomy.
Evidence-Based Detection
We use evidence-based methods to detect crisis language—not just simple keyword matching. This helps us distinguish between genuine crises and false positives (like discussing family history or trial eligibility criteria).
When in Doubt, We Err on the Side of Caution
If we're uncertain whether language indicates a crisis, we provide resources anyway. It's better to offer help when it's not needed than to miss someone who needs support.
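One way to picture this policy is as a decision rule over a classifier's confidence score. The score and threshold below are hypothetical, not our production values; the sketch only illustrates that uncertainty resolves toward showing resources.

```typescript
// Hypothetical sketch of the "err on the side of caution" rule.
// crisisScore is a 0..1 confidence from an evidence-based classifier that
// already weighs context (e.g., first-person intent vs. a discussion of
// family history or trial eligibility criteria).
function shouldShowResources(crisisScore: number): boolean {
  const GRAY_ZONE_FLOOR = 0.4; // hypothetical: "uncertain but possible"

  // Anything at or above the uncertainty floor gets resources. Offering
  // help that isn't needed costs little; missing a crisis could cost a life.
  return crisisScore >= GRAY_ZONE_FLOOR;
}
```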
Continuous Improvement
We regularly review our crisis detection accuracy with mental health professionals and update our methods to better serve you.
Privacy & Data Logging
What information we collect when crisis intervention is triggered.
What We Log
- The message that triggered crisis detection (to improve accuracy)
- Whether the crisis banner was displayed
- Whether you clicked on any crisis resource links
- Whether you continued the conversation afterward
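For readers who want a concrete picture, a crisis event log entry might look something like the sketch below. The field names mirror the list above but are hypothetical, not our production schema; the timestamp is an assumed field, since any retention policy needs one.

```typescript
// Hypothetical shape of a crisis event log entry, mirroring the list above.
interface CrisisEventLog {
  triggeringMessage: string;      // the message that triggered detection
  bannerDisplayed: boolean;       // whether the crisis banner was shown
  resourceLinksClicked: string[]; // which resource links, if any, were clicked
  conversationContinued: boolean; // whether the conversation continued afterward
  timestamp: string;              // assumed field: ISO 8601, used for retention
}
```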
How We Use This Information
- Improve detection accuracy: Review false positives and false negatives to refine our methods
- Legal compliance reporting: California law (SB 243) requires annual reports to the Office of Suicide Prevention (aggregated, anonymized data only)
- Safety monitoring: Ensure our crisis protocols are working as intended
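As a sketch of what "aggregated, anonymized" means in practice: individual events are reduced to counts before reporting, so no message text or user identifiers survive. The report fields below are hypothetical; actual contents follow the Office of Suicide Prevention's requirements.

```typescript
// Hypothetical sketch: reducing crisis events to anonymized aggregate counts.
// Reuses the CrisisEventLog shape sketched under "What We Log" above.
interface AnnualReport {
  year: number;
  crisisBannersShown: number;
  resourceLinkClicks: number;
}

function aggregateForReport(events: CrisisEventLog[], year: number): AnnualReport {
  return {
    year,
    crisisBannersShown: events.filter((e) => e.bannerDisplayed).length,
    resourceLinkClicks: events.reduce(
      (total, e) => total + e.resourceLinksClicked.length,
      0,
    ),
  };
}
```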
Who Has Access
Crisis event logs are accessible only to authorized personnel for safety monitoring, compliance reporting, and system improvement. We do not share crisis-related information with third parties except as required by law or to prevent imminent harm.
Retention Period
Crisis event logs are retained for 7 years to comply with California reporting requirements and to maintain a comprehensive safety audit trail.
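The retention rule itself is simple to express; here is a minimal sketch, assuming logs carry a creation date as noted above.

```typescript
// Hypothetical sketch of the 7-year retention check.
const RETENTION_YEARS = 7;

function isExpired(loggedAt: Date, now: Date = new Date()): boolean {
  const cutoff = new Date(now);
  cutoff.setFullYear(cutoff.getFullYear() - RETENTION_YEARS);
  return loggedAt < cutoff; // entries older than the cutoff are purged
}
```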
Why This Page Exists
California SB 243, signed into law on October 13, 2025, and effective January 1, 2026, requires AI chatbots that provide health-related information to detect mental health crises and immediately refer users to appropriate crisis services.
The Law Requires:
- Evidence-based methods to detect suicidal ideation and self-harm
- Immediate referral to 988, 911, and Crisis Text Line when a crisis is detected
- Public disclosure of crisis intervention protocols (this page)
- Annual reporting to California Office of Suicide Prevention (starting July 2027)
We comply with SB 243 because it's the law—but more importantly, because we believe people facing serious health challenges deserve immediate access to mental health support when they need it most.
Questions About Our Crisis Protocols?
If you have questions about how our crisis intervention system works, please contact us.