Second Opinion: Doctors warn of rising cyberchondria as AI self-diagnosis spreads

Medical experts warn that unchecked use of tools like ChatGPT, Gemini, and Google for self-diagnosis is fuelling anxiety and misinformation among patients


Artificial intelligence tools such as ChatGPT, Gemini, and Grok have become the first point of reference for many patients seeking medical information. From googling symptoms to verifying prescriptions, the practice of self-diagnosis through digital platforms has grown rapidly.

While this trend offers quick and cost-free access to information, it has also created new challenges for medical practitioners. Patients now often arrive at hospitals with preconceived ideas about their illnesses and treatments, complicating consultations and trust-building.

In a panel discussion on Second Opinion, doctors from across specialities — Dr Raymond Dominic (onco-critical care, Apollo Pritam Cancer Centre), Dr A Ashok Kumar (interventional cardiology, Rela Hospital), Dr Arvind Santosh (obstetrics and gynaecology), and Dr Soma Sundar (orthopaedic and spine surgery, Kauvery Hospital) — discussed how AI is reshaping the doctor-patient dynamic.

Ease of access driving over-reliance on AI

Dr Raymond Dominic noted that the widespread use of AI stems from the accessibility of information. “The internet is no longer a luxury — it’s a necessity. People are a lot more informed, which is a good thing, but we need to learn how to nuance it,” he said.

This easy access comes with a downside. Patients tend to interpret general information as medical advice applicable to their specific condition. This leads to anxiety, misinterpretation, and in some cases, delayed treatment.

Dr Arvind Santosh explained that while AI can effectively answer basic health queries, it cannot replace medical judgment. “Artificial intelligence in medicine is developing fast, but users are not trained to ask the right questions,” he said, adding that errors in prompting can produce misleading results.

From reassurance to cyberchondria

The habit of constant online searching for health information has given rise to what experts call cyberchondria — excessive anxiety about one’s health due to online self-diagnosis. Dr Dominic warned that this behaviour can escalate minor symptoms into perceived serious illnesses, leading to unnecessary emotional distress.

In some cases, patients abandon prescribed treatment after reading conflicting information online. Others delay consulting doctors, assuming AI-generated results are sufficient for medical decision-making.

Doctors on the panel agreed that AI should be treated as a supplementary tool for gathering information, not as a substitute for professional judgment. They urged patients to verify data through credible medical sources and consult physicians before acting on any AI-generated advice.

Cardiac anxiety and misinformation

Cardiology has seen a notable rise in misinformation-driven anxiety. Dr Ashok Kumar observed that patients frequently misinterpret common symptoms like heartburn or mild chest pain as signs of a heart attack. “A chest pain in a 20-year-old could be due to muscle strain, but online searches often list heart attack first, creating unnecessary panic,” he said.

He pointed out that while access to information encourages health awareness, it also heightens anxiety when context is missing. Online content, often sourced from Western healthcare databases, may not reflect the demographic or lifestyle differences of Indian patients.

Dr Ashok Kumar also highlighted that fear induced by online content sometimes deters genuine patients from seeking timely care. Others, in contrast, seek unnecessary consultations without actual symptoms, burdening healthcare services.

The double-edged sword in orthopaedics and diagnostics

In orthopaedic practice, doctors increasingly encounter patients who self-interpret X-rays, MRI scans, or test results using AI tools. Dr Soma Sundar said that while informed patients make discussions easier, over-interpretation causes confusion and alarm.

He explained that normal age-related findings, such as mild disc prolapse or knee degeneration, are often flagged as serious conditions by AI tools. “If you search for back pain, the first few results may suggest cancer. This creates unnecessary fear even in healthy individuals,” he said.

Such over-diagnosis contributes to anxiety and erodes trust between doctors and patients. Many individuals now seek consultations only to confirm AI-generated conclusions, rather than to receive independent medical advice.

The need for digital literacy in healthcare

All panellists agreed that both patients and doctors must learn how to use AI responsibly in healthcare. Dr Arvind Santosh suggested that patients should use AI tools not to self-diagnose, but to identify the right specialist to consult or to understand basic test results.

He also emphasised that AI models trained on foreign healthcare data may produce outputs that are not entirely applicable in Indian contexts, reinforcing the need for local validation. As AI becomes more integrated into clinical workflows, doctors too must learn to use it effectively for diagnostics, research, and patient engagement.

Dr Dominic said many hospitals now use AI-based systems in electronic medical records, generating discharge summaries and enabling data-driven precision medicine. These developments, he said, mark the future of healthcare, but require caution in application.

Balancing information access with medical judgment

Doctors agreed that while AI platforms help patients feel more involved in their care, they also risk amplifying mistrust in prescribed treatments. Searching for drug names often exposes users to exhaustive lists of rare side effects, which can trigger psychosomatic symptoms or unnecessary fear.

Dr Ashok Kumar noted that this transparency, though beneficial as a “quality check,” can lead to confusion when patients interpret global medical data without considering local health variations. He stressed that healthcare costs and treatment approaches vary across regions, and AI tools cannot replace contextual clinical decisions.

The panel concluded that responsible AI use in medicine depends on digital literacy, credible sources, and collaboration between doctors and patients. As Dr Dominic summed up, “Use AI as a tool for information, not as a judge. Be careful where you seek information and always cross-check with qualified professionals.”
