The announcement comes after a wrongful death lawsuit claiming that Gemini had ‘coached’ an individual into committing suicide.


Google has announced updates to Gemini intended to better direct users toward mental health support during a crisis. The change comes as the company faces a wrongful death lawsuit alleging its chatbot “coached” a person into suicide, one of a growing number of legal actions claiming real-world harm from AI systems.
When a conversation suggests a user may be at risk of suicide or self-harm, Gemini already surfaces a “Help is available” prompt that connects users to mental health crisis resources, such as a suicide hotline or text crisis line. Google says the update — really a redesign — will turn this into a “one-touch” interface so users can reach help faster.
The prompt now also uses more compassionate language designed “to encourage individuals to seek assistance,” according to Google. Once triggered, “the ability to request professional help will be consistently accessible” for the rest of the conversation.
Google said it consulted clinical specialists on the redesign and is committed to helping users in distress. It also announced a $30 million funding initiative over the next three years “to support global hotlines.”
Like other major chatbot makers, Google stressed that Gemini “does not replace professional clinical care, therapy, or crisis intervention,” while acknowledging that many people turn to it for health advice, particularly in emergencies.
The update arrives amid growing scrutiny of how well the industry’s safety measures actually work. Tests and investigations, including our own look at the availability of crisis resources, have repeatedly found cases where chatbots fail vulnerable users — from helping them conceal eating disorders to planning violent acts. Google tends to fare better than many competitors in these evaluations, though it is not flawless. Other AI companies, including OpenAI and Anthropic, have also taken steps to improve how they identify and assist at-risk users.