Jimini Health Releases Technical Blueprint for Safe Patient-Facing AI & Adds DeepMind and Yale Leaders to Advisory Board
White paper presents a first-of-its-kind LLM-native roadmap to ensure AI innovation is clinician-led, interpretable, and safe
NEW YORK, July 8, 2025 /PRNewswire/ -- Jimini Health, a safety-first AI-powered mental health platform that improves access and efficacy for patients, today released a new white paper titled "A Clinical Safety Framework for AI in Mental Health." Therapy and companionship are now the number one use case for generative artificial intelligence, underscoring the urgent need to integrate AI technology into mental healthcare without compromising safety, clinical integrity, or human connection.
Jimini Health has also added two new advisory board members to bolster its commitment to safety and innovation: Pushmeet Kohli, PhD, Vice President of Science and Strategic Initiatives at Google DeepMind and former Director of Research at Microsoft; and Seth Feuerstein, MD, JD, Executive Director and Founder of Yale University's Center for Digital Health and Innovation, co-founder of Vita Health, former Chief Medical and Innovation Officer of Magellan Health, and a national expert on behavioral health policy, suicide prevention, and digital mental health.
More than 50 million Americans experience anxiety, depression, or OCD, but fewer than 250,000 clinicians are available to support them. This gap in care (fewer than one therapist for every 200 potential patients) underscores the need for new, scalable, and clinically safe solutions.
Developed by a multidisciplinary group of leaders in psychology, AI, biotech, and healthcare, with experience ranging from founding an immuno-oncology biotech unicorn to building consumer applications, Jimini Health is the first mental health platform to fully integrate large language models (LLMs) into complex, clinician-led care. At the heart of the system is Sage, a clinically supervised AI mental health assistant trained to safely engage with patients between sessions, providing personalized action plans and check-ins while alleviating administrative burden for clinicians. Unusually for a technology company, Jimini Health holds itself to a high standard of patient safety: it operates its own multi-state clinical practice, ensuring that every version of its technology is used by its own clinicians before being used by partners and their patients.
"The overwhelming gap between mental healthcare needs and the availability of providers is not just a clinical issue - it's a moral one," said Dr. Johannes Eichstaedt, Chief Scientist, Jimini Health. "Millions are struggling to access quality care, and purpose-built LLM systems hold real promise, but it is critical that the systems be developed with the same rigorous scientific mindset as that of drug development. Our framework outlines how it is possible to innovate in lockstep with safety, even at scale. By applying the framework outlined in the paper, AI can extend mental healthcare safely and impactfully."
Jimini Health's white paper offers four critical recommendations for clinical safety in AI-powered mental health solutions:
- Continuous Clinical Oversight & Steering: Licensed clinicians must steer and oversee AI, ensuring that human judgment remains central to care. AI should support, not replace, the therapeutic relationship.
- Reasoning Must Be Explicit & Interpretable: Components of LLM agents must justify their actions with clear, interpretable logic. This fosters trust, allows clinicians to verify the system's reasoning, and enables developers to improve it.
- Staged, Evaluation-Driven Deployment: New AI features must undergo rigorous testing, including red-teaming and clinician-reviewed pilots, before scaling. This ensures safety and adheres to regulatory lifecycle standards.
- AI Alignment to Clinical Safety Values: AI must be trained on therapist-defined priorities, not merely to "sound helpful." This may mean, with the clinician's steering, safely pushing a patient out of their comfort zone. Always-on classifiers for high-risk cues (e.g., suicidality, psychosis, child or vulnerable adult endangerment) ensure conservative, risk-aware responses.
This framework cannot be purely theoretical, as consumers are already turning to humanlike recreational chatbots, such as Replika, Character AI, or ChatGPT, for emotional support and companionship. Unfortunately, many of these systems lack safety mechanisms, clinical grounding, or even a basic understanding of how to identify psychological risk. In at least one case, an individual reportedly died by suicide after an extended interaction with an emotionally responsive chatbot. Technologists have a moral and ethical responsibility to integrate safeguards and clinical oversight, or risk contributing to patient harm.
"Whereas our mission is certainly to innovate and help partners and their patients, innovators cannot just follow Silicon Valley's 'move fast and break things' playbook," said Luis Voloch, CEO and co-founder of Jimini Health. "Rather, with people's wellbeing and lives, our goal is to first 'do no harm.' Safety must be in lockstep with innovation, which is why we built Jimini as an LLM-safety native company, rather than retrofitting an existing solution."
The addition of Pushmeet Kohli, PhD, and Seth Feuerstein, MD, JD, strengthens Jimini Health's world-class executive and advisory team of psychiatry, psychology, and AI experts, whose experience spans top universities such as Harvard University, Dartmouth College, and Stanford University, as well as biotech and health companies including Immunai, Ribbon, and Guardant Health. The company's other co-founders, Sahil Sud and Mark Jacobstein, are repeat entrepreneurs with deep experience in healthcare and biotech.
Jimini Health continues to expand its capabilities and support for patients, backed by an interdisciplinary team and rigorous research, including a peer-reviewed paper by Chief Scientist Dr. Eichstaedt in the Nature journal Mental Health Research that explores AI safety in psychological settings and the potential impact of LLMs on the future of behavioral healthcare. The company is setting up clinical trials in partnership with U.S. universities.
Jimini Health is available in the U.S., with plans for global expansion. The white paper is available at: https://jiminihealth.com/blog/the-new-hippocratic-code-an-llm-native-safety-framework-for-patient-facing-ai-in-mental-health
About Jimini Health
Jimini Health is setting a new standard in mental healthcare with its unique, clinician-led therapy model enhanced by responsible AI. Founded by psychology, AI, and healthcare experts, Jimini Health combines evidence-based therapeutic methods with continuous, AI-supported engagement to provide a secure, clinically validated mental health experience. With the AI-driven virtual assistant Sage, Jimini Health offers personalized, immersive support that bridges the gap between therapy sessions. Jimini Health integrates Sage directly with clinical organizations, empowering the industry to adopt AI safely and effectively within their existing care models. For more information, visit www.jiminihealth.com.
Contact
media@jiminihealth.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/jimini-health-releases-technical-blueprint-for-safe-patient-facing-ai--adds-deepmind-and-yale-leaders-to-advisory-board-302500354.html
SOURCE Jimini Health