
OpenAI says ChatGPT’s behavior “stays the same” after false claims spread on social media that recent updates to its usage policy bar the chatbot from providing legal and medical advice. Karan Singhal, OpenAI’s head of health AI, says on X that the claims are “not accurate.”
“ChatGPT has never acted as a replacement for expert advice, but it remains a valuable tool to assist individuals in comprehending legal and health information,” Singhal writes, responding to a now-deleted post from the betting platform Kalshi that claimed, “JUST IN: ChatGPT will be unable to provide health or legal advice.”
According to Singhal, the inclusion of policies on legal and medical advice “is not a recent alteration to our terms.”
The policy updated on October 29th includes a long list of things users cannot do with ChatGPT, including “offering personalized advice that necessitates a license, such as legal or medical advice, without appropriate engagement from a qualified professional.”
That language mirrors OpenAI’s previous usage policy for ChatGPT, which told users to avoid activities that “could severely hinder the safety, wellbeing, or rights of others,” including “providing personalized legal, medical/health, or financial advice without appropriate examination by a qualified professional and informing users of the use of AI assistance and its potential shortcomings.”
OpenAI previously maintained three separate policies: a “universal” policy, plus individual ones for ChatGPT and API usage. The recent update consolidates them into a single list that the company describes as “reflecting a universal set of policies across OpenAI products and services,” though the rules themselves remain the same.