SOME of the UK’s leading AI experts have warned the 40 million people using ChatGPT as a doctor that “for the vulnerable, it could be a curse”. One said that “when an algorithm ‘hallucinates’ in a business report, it’s embarrassing. When it does it with your health, it’s dangerous”.
A visit to the doctor is rapidly being replaced by the digital assistant, with OpenAI saying a staggering 40 million people now turn to ChatGPT for health guidance every day.
More than 5% of all global interactions on the platform are now medical.
The platform has more than 800 million weekly users, and millions of them now treat the chatbot as a first port of call for checking symptoms, decoding medical jargon, and navigating the administrative nightmare of modern medicine in countries that rely on private healthcare.
Seven out of ten health queries occur outside clinic hours, peaking when surgeries are closed and A&E wait times are at their most daunting.
This comes after a warning against using ChatGPT for financial advice.
It could be dangerous
Colette Mason, Author & AI Consultant at London-based Clever Clogs AI, said relying on AI for medical advice could be dangerous.
She added: “We’ve already watched this horror film with mental health and now we’re queuing up for the sequel. Mental health has seen some wins but also devastating losses: emotional dependency, affirmed delusions and crisis interventions.
“Physical health is heading down exactly the same path and it could be dangerous. OpenAI celebrates 40 million daily users seeking health guidance but forgets what happens when pattern-matching meets real medical emergencies.
“We had a chance to get this right after the mental health wake-up calls. Instead, we’re doing it again, faster, and with your mum’s stroke symptoms instead of your mate’s anxiety spiral. It is a fantastic tool for power users who are doing their own informed research. For the vulnerable, it could be a curse.”
Who’s accountable?
Mitali Deypurkaystha, AI Strategist & Author at Newcastle upon Tyne-based Impact Icon AI, said healthcare advice from AI should be treated with caution.
She continued: “AI in healthcare isn’t a crystal ball. It’s a torch. In the right hands, it helps us see sooner and more clearly. In the wrong context, it can blind us.
“We already know AI can outperform human specialists in specific tasks. A University of Southampton study found that an AI model identified hidden issues in chest scans with 74% accuracy, compared with 53% for radiologists.
“Here’s the rub. The strongest outcomes consistently come from doctors and AI together, not either alone. People turning to ChatGPT isn’t automatically reckless. Used cautiously, it can help patients ask better questions. I’ve seen it speed up a friend’s skin condition diagnosis by supporting, not replacing, his GP.
“The danger is when overstretched systems push people to use ChatGPT instead of care. OpenAI isn’t a healthcare provider. ChatGPT is a generalist trained to generate answers. If the wrong answer is generated, who is accountable?”
Distress signal from a broken system
Rohit Parmar-Mistry, Founder at Burton-on-Trent-based Pattrn Data, said technology such as AI should be used to help doctors.
He added: “This isn’t a triumph of innovation, it’s a distress signal from a broken system. We’ve moved from ‘Dr Google’, where you panicked over a list of symptoms, to ‘Dr ChatGPT’, a conversational hallucination engine that sounds authoritative even when it’s dead wrong.
“The fact that 40 million people are turning to a chatbot isn’t progress. It’s a damning indictment of failing healthcare access. People aren’t choosing an algorithm over a doctor because it’s better, they’re choosing it because getting a GP appointment often feels like winning the lottery. I work with these systems daily.
“Large Language Models (LLMs) are designed to predict the next plausible word, not diagnose pathology. They don’t have medical degrees, they have probability scores. When an algorithm ‘hallucinates’ in a business report, it’s embarrassing.
“When it does it with your health, it’s dangerous. We need to stop pretending a chatbot is a cure for chronic underfunding. Technology should support doctors, not replace them.”
Warning sign
Patricia McGirr, Founder at Burnley-based Repossession Rescue Network, said the figures showed the system was broken.
She continued: “Forty million people using ChatGPT for health advice is a warning sign, not a win. People are not turning to a chatbot because it is better than a GP. They are turning to it because the system is closed when they need help.
“Late at night, in pain, anxious, or stuck in admin limbo, AI is filling a gap that healthcare has left behind. That does not make it a doctor. It makes it a stopgap. A language model can explain terms and point people to options, but it cannot see, examine or take responsibility.
“When advice goes wrong, there is no duty of care, no comeback, and no accountability. Using AI to navigate broken access is understandable. Treating it as care is a line we should not quietly cross.”