OpenAI’s Policy on Health: Company Clarifies Claims

OpenAI has officially denied viral claims that ChatGPT will no longer provide health or medical guidance. The company confirmed that its latest update, announced on October 29, did not change its rules on offering medical or legal advice; it simply reorganized its policies for clarity. Despite this, misinformation spread quickly on social media, sparking confusion and concern among users.

What Changed in OpenAI’s Policy on Health

The misunderstanding began when a post from betting platform Kalshi claimed, “ChatGPT will no longer provide health or legal advice.” The post went viral before being deleted. OpenAI’s head of health AI, Karan Singhal, dismissed the rumor on X, saying, “Not true. Model behavior remains unchanged.”

Here’s what the update actually includes:

  • The usage policy continues to restrict tailored medical or legal advice without licensed oversight.
  • ChatGPT can still provide general health information, but not professional diagnoses or prescriptions.
  • The new policy unifies three separate documents—covering ChatGPT, API, and overall use—into one consistent framework.

OpenAI emphasized that the goal of the update was to make its terms clearer and easier to understand, not to restrict access to information. The company reaffirmed its stance that ChatGPT should not replace professional medical consultation.

Experts noted that the policy still mirrors OpenAI’s long-standing safety rules, which prohibit AI-generated advice that could affect a user’s wellbeing without human review.

By consolidating its terms, OpenAI aims to ensure users and developers follow a single, transparent set of standards. The clarification reinforces OpenAI’s policy on health, which prioritizes accuracy, accountability, and user safety in medical discussions.

The company also reminded users that while ChatGPT can assist with general knowledge and awareness, it should never be used as a substitute for licensed medical professionals. This update, though widely misunderstood, reflects OpenAI’s continued commitment to responsible and ethical AI use.
