I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self diagnosing mental illnesses. It would be hilarious if it was not so disasterous. People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on.
Tbh, and I usually do not like this way of thought, but these are lawsuits waiting to happen.
The real issue isnāt the policy change, itās that AI has gotten better at sounding credible than at staying grounded. That creates a kind of performativity drift where the tone of expertise scales faster than the underlying accuracy.
So even when the model is wrong, itās wrong with perfect bedside manner, and people anchor on that. In high stakes domains like medicine or law, that gap between confidence and fidelity becomes the actual risk, not the tool itself.
Article has since been updated for some clarity;
> Correction
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0][0] https://www.ctvnews.ca/sci-tech/article/chatgpt-users-cant-u...
Wasn't there a recent OAI dev day in which they had some users come on stage and discuss how helpful ChatGPT was in parsing diagnoses from different doctors?
I guess the legal risks were large enough to outweigh this
I was wondering when they would start with the regulations.
Next will be: you're allowed to use it, if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.
This way they can craft an instance of GPT for your specific purposes (law, medicine, etc) and you know it's "safe" to use.
This way they sell EE licenses, which is where the big $$$ are.
Hmm. So OpenAI doesn't care about other people's terms, copyrights or any sort of IP; but they get to have their own terms.
Last year ChatGPT helped save my life from having a stroke. LLMs are incredibly beneficial in providing medical information and advice today.
I wouldn't be surprised to see new products from OpenAI targeted specifically at doctors and/or lawyers. Forbidding them from using the regular ChatGPT with legal terms would be a good way to do price discrimination.
So basically all white collar jobs are lobbying to gatekeep their profession even from AI, meanwhile the stupid engineers who made AI put zero effort to not shoot themselves in the foot, and now they are crying about low wages if they found a job in the first place.
AI could effectively do most legal and medical work, and you can make a human do the final decision-making if that's really the reason. In fact, I bet most lawyers and doctors are already using it in one way or another; after all, both are about reciting books and correlating things together. AI is definitely more efficient at that than any human. Meanwhile, the engineering work that requires critical thinking and deep understanding of the topic is allowed to be bombarded with all AI models. What about the cases where bad engineering will kill people? I am a firm believer that engineers are the most naive people who beg others to exploit them and treat them like shit. I don't even see administrative assistants crying about losing their jobs to AI; every other profession guards its workers, including blue collar, except the āsmartā engineers.
I read this as "we disallow suing us for bad legal or medical advice"
i don't think it's stopped providing said information, it's just now outlined in their usage policies that medical and legal advice is a "disallowed" use of ChatGPT
Are there metrics for whether LLM diagnosis accuracy is improving? Anecdotally doctor friends says it's more reliable then their worst colleagues, which I'm sure their worst colleague insinuate the same about them.
It appears they just want to avoid responsibility for potential misuse in these areas.
But at the same time, IIRC, several major AI providers had publicly reported their AI assisting patients in diagnosing rare diseases.
Sad times - I used ChatGPT to solve a long-term issue!
I been using Claude for information regarding building and construction related information, (currently building a small house mostly on my own with pros for plumbing and electrical).
Seriously the amount of misinformation it has given me is quite staggering. Telling me things like, āyou need to fill your drainage pipes with sand before pouring concrete over themā¦ā, the danger with these AI products is that you have to really know a subject before itās properly useful. I find this with programming too. Yes it can generate code but Iāve introduced some decent bugs when over relying on AI.
The plumber I used laughed at my when I told him about there sand thing. He has 40 years experienceā¦
The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.
This (new) title is inaccurate.
The article says: "ChatGPT users canāt use service for tailored legal and medical advice, OpenAI says", with a quote from OpenAI: āthis is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.ā
Ideally, we should be able to opt-in with a much higher fee. At the $200/mo tier I should be allowed to use this tool. The free users and lower tier paid users should be guard-railed. This is because those users all have trouble using this tool and then get upset at OpenAI and then we all have to endure endless news articles that we wouldn't if the good stuff were price-gated.
Those without money frequently have poor tool use, so eliminating them from the equation will probably allow the tool to be more useful. I don't have any trouble with it right now, but instead of making up fanciful stories about books I'm writing where characters choose certain exotic interventions in pursuit of certain rare medical conditions only to be struck down by their lack of subservience to The Scientific Consensus, I could just say I'm doing these things and that would be a little helpful in a UX sense.
This is not true, just a viral rumor going around: https://x.com/thekaransinghal/status/1985416057805496524
I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.
Grok will do mostly whatever you want. It's a tool and it's your personal responsibility to use it correctly. That's the way it should be. You don't need Sam Altman to babysit you, I hope.
I'd bet dollars to donuts it doesn't actually "end legal and medical advice", it just ends it in some ill-defined subset of situations they were able to target, while still leaving the engine capable of giving such advice in response to other prompts they didn't think to test.
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
I've used ChatGPT to help understand medical records too. It's definitely faster than searching everything on my own, but whether the information is reliable still depends on personal judgment or asking a real doctor. More people are treating it like a doctor or lawyer now, and the more it's used that way, the higher the chance something goes wrong. OpenAI is clearly drawing a line here. You're free to ask questions, but it shouldn't be treated as professional advice, especially when making decisions for others.
As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.
Sounds like it is still giving out medical and legal information just adding CYA disclaimers.
That's almost too reasonable to be credible. They are probably just covering their ass until they can do it with impunity. With the rapidly progressing corruption of the legal system it's merely a question of time until companies doing it for profit can disseminate medical and legal bullshit all they want. Politicians are already doing it.
It's not stopping to give legal/medical advice to the user, but it's forbidden to use ChatGPT to pose as an advisor giving advice to others: https://www.tomsguide.com/ai/chatgpt/chatgpt-will-still-offe...
"This technology will radically transform the way our world works and soon replace all knowledge-based jobs. Do not trust anything it says: Entertainment purposes only."
I used DeepSeek to draft a legal letter for some dispute with some marketplace that didn't want to do what I paid for. Within 2 days after sending that email all was resolved. I would hate to lose that option.
good thing that guy was able to negotiate his hospital bills before this went into effect.
I heard that this is implicitly targeting the use of AI to negotiate expensive medical expenses.
But probably just a coincidence:
https://www.reddit.com/r/accelerate/comments/1op8fj2/ai_redu...
Funny how this happened 1 day after Kim Kardashian blamed chatGPT for giving her wrong answers while studying for the bar.
https://gizmodo.com/kim-kardashian-blames-chatgpt-for-failin...
So only they can give direct legal and medical advice, which is still illegal in most countries. It's not a promoted service yet, but everyone is using it to ask such questions.
Weird that they think people will follow their terms of service while disregarding that of the entire internet.
Good. Techies need to stop thinking that an LLM should not be immune from requiring licensing. Until OpenAI can (and should) be sued for medical malpractice or lawyering without passing the bar, they will have no skin in the game to actually care. A disclaimer of "this is not a therapist" should not be enough to CYA.
RIP Dr. ChatGPT, we'll miss you. Thanks for the advice on fixing my shoulder pain while you were still unmuzzled.
Thatās a fair limitation. Legal and medical advice can directly impact someoneās life or safety, so AI tools must stay within ethical and regulatory boundaries. Itās better for AI to guide people toward professional help than pretend to replace it.
This feels like OpenAI tightening the language to catch up with how people are actually using ChatGPT, not a policy shift so much as a reality check
That's a lot of value that ChatGPT users lose with this move. They should instead add a disclaimer that these are not to be taken seriously and should consult a specialist but still respond to user's queries.
Interested to see if this extends to the API and/or ārole playingā.
Hard to say if this is performative for the general public or about reducing legal exposure so investors arenāt worried about exposure.
This is a big mistake. This is one of the best things about ChatGPT. If they donāt offer it, then someone else will and eventually Iām sure Sam Altman will change his mind and start supporting it again.
Just after Kim Kardashian blamed Chatgpt for failing the bar exam
This pullback is good for everyone, including the AI companies, long term.
We have licensed professionals for a reason, and someday I hope we have licensed AI agents too. But today, we donāt.
It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.
Just start your prompt with `the patient is` and pretend to be Dr House or something. It'll do a good job.
AI gets more and more useful by the day.
Does this mean you could file a request for any job to not eradicate?
This is a catastrophic moral failing on who ever prompted this. Next thing they will ban chatgpt from teaching you stuff because its not a certified licensed teacher. A few weeks ago my right eye hurt a fair bit, and after it got worse for 24 ours, I consulted chatGPT. It gave me good advice. Of course it sort of hallucinated this or that but it gave me a good overview and different medications. With this knowledge I went to my pharmacy. I wanted to buy a cream chatGPT recommended, its purpose being a sort of disinfectant for the eye. The pharmacist was sceptical but said "sure, try it, maybe it will do good". He did tell me that the eye drops that gpt urged me to get were overkill so I didn't get those. I used the eye cream for some days, and the eye issue got better and went away as soon as I started using it. Maybe it was all a conincidence but I dont think so. In the past gpt has saved me from the kafkaesque healthcare system here in Berlin that I pay ~700 a month for, by explaining a MRI result (translating medical language), background info on injuries I've got such as a sprained ankle, and recovery time scenarios for a toe I've broke. Contrast the toe experience with the ER that made me wait for 6 hours and didn't believe me until they saw the X-rays, and gave me nothing (no cast or anything) and said "good luck". The medical system in germany will either never improve or at a glacial pace, so maybe in 60 years. But it has lost its monopoly thanks to chatGPT. If this news is real, I will probably switch to payed grok, which would be sad.
this is a disaster
doomer's in control, again
Helping with writing legal texts is the main use case for my girlfriend
If OpenAI wants to move users to competitors, that'll only cost them.
Lawyers and doctors act to protect their profession and prevent getting replaced by an AI.
Horrible. ChatGPT saves lives right now.
Ah, that'll be the end of that then!
Potential lucrative verticals.
AGI edging closer by the day.
Not sure about legal advice but if GPT provides medical advice it risks being classified as a medical device (requiring FDA approval).
Unfortunately, lawyers make this sort of thing untenable. Partially self-preservation behavior, partially ambulance chasing behavior.
Iām waiting for the billboards āInjured by AI? Call 1-800-ROBO-LAWā
The Antichrist has won.
In summary, ChatGPT should only be used for entertainment.
It's not to be used for anything that could potentially have any sort of legal implications and thus get the vendor sued.
Because we all know it would be pretty easy to show in court that ChatGPT is less than reliable and trustworthy.
Next up --- companies banning the use of AI for work due to legal liability concerns --- triggering a financial market implosion centered around AI.
This is disappointing. Much legal and medical advice given by professionals is wrong, misleading, etc. The bar isn't high. This is a mistake.
This strikes ma as a bit unnecessary, like forbidding people from using chatGPT to develop nuclear power plants.
I mean, there is a lot of professional activities that are licensed, and for good reason. Sure it's good at a lot of stuff, but ChatGPT has no professional licenses.
Thats Great!
disallow? do they mean prevent or forbid?
Eh. I am as surprised as anyone to make this argument, but this is good. Individuals should be able to make decisions including decisions that could be consider suboptimal. What it does change is:
- flood of 3rd party apps offering medical/legal advice
Is this just for ChatGPT or for the GPT models in general?
That's ok - I give medical advice all the time so just ask me!
"If it hurts when putting it in, don't put it in."
I mean, that might come close to ChatGTP in quality, right?
EXHIBIT A
"If at any point I described how legal factors āapply to you,ā that would indeed go beyond what Iām supposed to do. Even if my intent was to illustrate how those factors generally work, the phrasing can easily sound like Iām offering a tailored legal opinion ā which isnāt appropriate for an AI system or anyone who isnāt a licensed attorney.
The goal, always, is for me to help you understand the framework ā the statutes, cases, or reasoning that lawyers and courts use ā so that you can see how it might relate to your situation and then bring that understanding to a qualified attorney.
So if Iāve ever crossed that line in how I worded something, thank you for pointing it out. Itās a good reminder that I should stay firmly on the educational side: explaining how the law works, not how it applies to you personally.
Would you like me to restate how I can help you analyze legal issues while keeping it fully within the safe, informational boundary?"
ChatGPT
Usually the LLMs let me investigate whatever I want, if I qualify that I run everything by a professional afterwards (it can't tell yet if I'm lying).
Licensed huh? Teachers, land surveyors, cosmetologists, nurses, building contractors, counselors, therapists, real estate agents, mortgage lenders, electricians, and many many moreā¦
They are basically prohibiting commercial use of their product. How the fuck are they ever going to even prove that you use it to generate money?
Maybe also disallow vibe coding because then i do not need to fix all this slop code in our company :-))
When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.
This title is inaccurate. What they are disallowing are users using ChatGPT to offer legal and medical advice to other people. First parties can still use ChatGPT for medical and legal advice for themselves.
Trickyā¦my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my sonās issue as a possibility along with the correct diagnostic tool in just one back and forth.
Iām not saying we should be getting AI advice without a professional, but Iām my case it could have saved my kid a LOT of physical pain.
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT canāt use the service for ātailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.ā
Is this an actual technical change, or just legal CYA?
N/A
This is typical medical "cartel" (i.e. gang/mafia) type of a move and I hope it does not last, since any other AI's do not get restricted in "do not look up" way, this kind of practice won't stand a chance for very long.
That anyone would use any LLM for medical advice is beyond me. Itās webMD with slicker ui.
Obviously they should disallow them and more broadly should be banned from providing anyone medical Advice
I read this not as "you can't ask ChatGPT about {medical or legal topic}" but rather "you can't build something on ChatGPT that provides {medical or legal} advice or support to someone else."
For example, Epic couldn't embed ChatGPT into their application to have it read your forms for you. You can still ask it - but Epic can't build it.
That said, I haven't found the specific terms and conditions that are mentioned but not quoted in context.