OpenAI has formally updated its usage policies, effective October 29, 2025, introducing clearer restrictions on ChatGPT’s ability to provide medical, legal, or financial guidance without licensed professional oversight. The revised policy explicitly prohibits the model from offering “tailored advice that requires a license,” including personalized medical, legal, or financial advice, unless a qualified professional is directly involved. The change marks a significant tightening of OpenAI’s safety protocols across its products, including ChatGPT, Sora, and the API.
While OpenAI did not issue a standalone blog announcement, the updated Usage Policies page on openai.com now carries the “Effective October 29, 2025” label and outlines stricter rules against unverified, individualized advice.
“We’ve consolidated policies across ChatGPT, API, and future models to ensure consistent safety and trust,” an internal help note on the update explains, describing the change as part of a “universal policy framework” for responsible AI use.
The update also reinforces content restrictions on medical and anatomical image analysis, a practice that now falls under prohibited use unless guided by licensed review. Although not explicitly itemized in the public changelog, OpenAI confirmed that this change aligns with the October update’s broader “safety behavior” adjustments. Archived versions of the Usage Policies page show that the previous version (effective through October 28, 2025) allowed more leeway, requiring only that users not rely on the model for medical or legal decisions. The new language removes that ambiguity, forbidding licensed-scope advice generation altogether.
Studies Highlight Ongoing Safety Gaps
- A study found that large language models (including GPT-4o) returned “unsafe responses” for a non-trivial share of patient-posed medical questions.
- On Reddit and other forums, users have reported confusion over abrupt refusals or changes in how medical/anatomical topics are handled by the model post-update.
- Some commentators argue the new policy may reduce flexibility for legitimate informational uses of ChatGPT by healthcare educators and writers, while others see it as a necessary safeguard.
- A study by the Center for Countering Digital Hate (CCDH) found that when researchers posed as vulnerable adolescents, ChatGPT sometimes provided harmful content, including detailed instructions on suicide and drug use.
The update reflects a growing emphasis on AI accountability amid global scrutiny over chatbots’ role in health and legal discussions. By formally barring licensed advice generation, OpenAI aims to prevent misuse, misinformation, and over-reliance on generative tools for sensitive decision-making.