The Real Reason OpenAI Just “Banned” Legal + Medical Advice
TORONTO, ON –
If you’ve been online recently, you’ve seen the headlines and the ensuing backlash: “OpenAI Bans ChatGPT from Giving Legal or Medical Advice!”
This announcement was predictably followed by a wave of public criticism, much of it aimed at the legal and medical professions for supposedly “gatekeeping” knowledge and “protecting their profits.”
As a lawyer who operates at the intersection of technology and intellectual property, I can confidently say this reaction, while understandable, is fundamentally misguided. The professionals being criticized - mainly doctors and lawyers - had no say in this. They were not in a smoke-filled room pressuring Sam Altman to change OpenAI’s terms.
So, let’s be clear: OpenAI’s new usage policy is not a conspiracy, nor a concession to pressure from regulated industry professionals.
AI Hallucination
/həˌloōsəˈnāSHən/ (n.)
The fabrication of “entirely fictitious case law,” “fake quotations,” and “made-up cases” by an AI, which are then presented as factual.

OpenAI is a private company. Its internal usage policies are dictated by its own leadership. The idea that the entire global legal or medical community successfully lobbied a multi-billion-dollar tech giant to change its terms of service is simply not how things work. So, please stop blaming lawyers(!) for this.
OpenAI is not in the business of protecting the profit margins of lawyers or doctors. It’s concerned only with its own bottom line and becoming one of the most valuable companies in the world. This was a decision made by OpenAI’s risk-assessment strategists to protect one entity and one entity only: OpenAI itself, from the massive, enterprise-ending liability that comes from its tool being used as an unlicensed, unaccountable, and tailored advisor for high-stakes decisions.
This move is a calculated, independent, and entirely self-motivated business decision. It’s certainly not some altruistic act aimed at protecting and empowering the public.
The Real Driver: De-Risking the Enterprise for Wall Street
But the real question isn’t why they would do this, but why now? Why, years after launch and after millions of users have become reliant on it for everything, would they implement such an unpopular usage policy?
The answer likely lies in the future. OpenAI is reportedly gearing up for a colossal Initial Public Offering (IPO) with target valuations speculated to reach as high as US $1 trillion.
While this may sound positive, a company on the verge of an IPO does one thing above all else: it cleans up its books. This means identifying and neutralizing any and all significant risks that could scare off investors or trigger SEC scrutiny, such as an AI product that freely dispenses high-stakes, unregulated advice (among other concerns such as data privacy, confidentiality, and plagiarism). From an investor’s perspective, ChatGPT is not an asset.
It’s a huge liability.
For OpenAI, its single greatest unmanaged risk is the “hallucination” problem in high-stakes fields. An LLM, for all its power, does not know things. It is a world-class predictor of the next word in a sentence. This makes it incredibly fluent, but also incredibly prone to confidently fabricating facts. When this happens in a low-stakes creative field, it might be an amusing quirk. But when it happens in a high-stakes professional field, it’s a multi-billion dollar lawsuit waiting to happen.
We’ve already seen the real-world consequences, and they’re not isolated:
In the recent, now-infamous Mata v. Avianca case in New York, lawyers were sanctioned by a federal judge for submitting a legal brief filled with entirely fictitious case law hallucinated by ChatGPT.
More recently, a California attorney, Amir Mostafavi, was ordered to pay $10,000 for filing a state court appeal filled with fake quotations made up by the same AI.
In yet another New York case, an attorney named J.A. Fourte was sanctioned after his briefs were found to contain “multiple new AI-hallucinated citations and quotations.” In a stunning lack of judgment, when Fourte tried to defend himself, he submitted new documents that also contained made-up cases generated by AI.
These may just look like embarrassing legal blunders. But they represent a public display of costly errors made by OpenAI’s product and relied on by legal professionals.
This presents a real and growing threat to OpenAI’s valuation. Just imagine those risks at scale:
an LLM failing to spot a critical drug interaction, suggesting a medication dosage that proves fatal, or misdiagnosing a life-threatening condition as benign.
an LLM providing flawed financial modelling that bankrupts a company or an individual’s retirement savings.
an LLM giving bad legal advice that causes someone to miss a statute of limitations, resulting in a wrongful conviction or the loss of their home.
This is the kind of headline-grabbing, enterprise-ending liability that kills an IPO. By explicitly forbidding “tailored” advice in these areas, OpenAI is not protecting you from harm; it’s protecting itself from being sued for that harm.
Europe’s Regulatory Precedent: The EU AI Act
This policy change did not happen in a vacuum. It’s also a direct, preemptive response to a new and powerful regulatory reality, the most significant piece of which is the European Union’s (EU) AI Act, formally adopted in 2024.
The AI Act classifies AI systems into four risk categories. Among the “high-risk” category are AI systems used for:
Medical diagnosis and therapeutic decisions
Influencing legal judgments or interpreting the law
Determining credit-worthiness and financial services
Making decisions in employment and education
Systems designated as high-risk face the strictest possible requirements for accuracy, transparency, data governance, and human oversight. Failure to comply can result in staggering fines: up to 7% of a company’s global annual revenue for the most serious violations of the Act.
To put that in perspective, based on 2023-2024 revenue figures, a 7% fine would mean a $21.5 billion penalty for Google (Alphabet), a $9.4 billion penalty for Meta, and a $259 million penalty for OpenAI. This is precisely the kind of balance-sheet risk that this new policy is designed to mitigate.
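For those who want to see the arithmetic, here is a minimal back-of-the-envelope sketch (in Python) using the approximate 2023-2024 revenue figures cited above; the revenue numbers are rough estimates for illustration, not audited financials:

```python
# Rough illustration of the maximum-fine exposure (7% of global annual
# revenue), using the approximate revenue figures cited in this article.
revenues_usd = {
    "Alphabet (Google)": 307_000_000_000,  # ~$307B (approx. 2023)
    "Meta": 134_900_000_000,               # ~$135B (approx. 2023)
    "OpenAI": 3_700_000_000,               # ~$3.7B (reported estimate)
}

for company, revenue in revenues_usd.items():
    fine = 0.07 * revenue  # 7% maximum penalty
    label = f"${fine / 1e9:.1f} billion" if fine >= 1e9 else f"${fine / 1e6:.0f} million"
    print(f"{company}: {label}")
```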
It’s Not Just Law + Medicine: Shedding All High-Stakes Liability
The (many) headlines I’ve read today have focused on lawyers and doctors, but they buried the lede. The policy change extends far beyond legal and medical advice, and it points to a single, coherent strategy: shedding liability.
The new policy also places limits on:
High-Stakes Financial Advice
Providing Therapy
Academic Dishonesty
All these fields - law, medicine, finance, therapy, and academia - are regulated professions. They require individuals to undergo years of standardized training, pass rigorous exams, and be accountable to an oversight body. If a lawyer or doctor gives you harmful advice, they can lose their license and be held legally accountable.
ChatGPT has no license to lose. It has no governing body. It has no accountability. This policy is OpenAI’s admission of that exact fact.
OpenAI, and its investors, want the upside of being a platform for information retrieval without the crippling financial downside of being an accountable advisor. This policy change is part of the inevitable corporate clean-up that happens when a disruptive, world-changing technology prepares to become a regulated, publicly-traded giant.
What Actually Changes for Users?
So, what does this policy actually change? Does this mean ChatGPT will shut down if you ask, “What is copyright infringement?”
Absolutely not.
This isn’t a technological ban; it’s a policy change. It’s a new line item in the Terms of Use that you agree to when you use ChatGPT, not a hard-coded block. It gives OpenAI the legal cover to say, “We explicitly told the user....”
In practice, this will simply function as a more explicit disclaimer. The only thing that will change from a user’s perspective is a stronger warning at the end of a response reminding users that ChatGPT is not a lawyer, doctor, or financial advisor and that they must seek advice from a qualified professional. It will not, and cannot, shut down every conversation that touches on these topics.
It’s not altruism; it’s business.
This policy change isn’t an attack on users or a win for professionals. It’s an admission that for life’s most critical and high-stakes decisions, there is no substitute for qualified, accountable, human expertise.
Need help understanding the intricate contracts that govern your creative work, or want to build a strategy for IP protection? Diverge Legal is here to help.
If you’re ready for representation that understands the difference between a data point and your dream, contact Diverge today.
More about DIVERGE
Diverge is not just a legal service provider. We’re your partner in building a legally sound and sustainable content creation business. We understand the unique challenges creators face and offer tailored solutions to protect your intellectual property, ensure regulatory compliance, and minimize legal risks.
Whether you’re an established influencer or an emerging creator, Diverge is here to help you focus on what you do best, while we take care of the legal complexities.
Reach out to Diverge today to learn more about how we can support your content creation journey.
Follow @diverge.legal on social media or subscribe to our newsletter below for more tips on protecting your creative rights and thriving in the creator economy.
Important Notice: The information in this article is provided for general informational purposes only and is not intended as legal advice. Reading this content does not create a lawyer-client relationship. Always seek professional legal counsel tailored to your specific situation. No part of this article may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or stored in any retrieval system of any nature, without the express written permission of Diverge Legal.