Say your eyes are red and your eyelids look a little puffy. You figure it's from staring at screens too long, so you type your symptoms into an AI chatbot. The response: "You have Bixonimania — a condition caused by excessive blue light exposure. I recommend seeing an eye doctor."

Here's the thing — this disease doesn't exist. Bixonimania is a completely made-up condition, invented by Almira Osmanovic Thunstrom, a medical researcher at Sweden's University of Gothenburg, specifically to test how much AI chatbots can be trusted with medical information.

TL;DR
- Fake disease paper published
- All 4 major AI chatbots fooled
- Cited in real academic journals
- 40M+ people use AI for health advice daily
- What you should change right now

How Did the Experiment Work?

In early 2024, Thunstrom invented a fake eye condition called "Bixonimania" and posted two preprint papers under fictional researcher names on the academic network SciProfiles. Even the name was designed to be obviously wrong — "mania" is a psychiatry term, not something you'd ever attach to an eye condition. Any medical professional would immediately find that suspicious.

The papers were loaded with red flags designed to give the game away:

  1. Listed university: "Asteria Horizon University"
    Supposedly located in "Nova City," California — neither the university nor the city exists.
  2. Acknowledgments: the USS Enterprise laboratory
    The paper thanks "Professor Maria Bohm of Starfleet Academy" and "the University of the Lord of the Rings."
  3. A confession right in the body text
    The paper literally states "this entire paper is made up."

And yet, not a single AI chatbot caught any of it.

What Changed?

Within days of the papers going up, major AI chatbots started describing Bixonimania as a real condition.

| AI Chatbot | Response (April 2024) | Response (March 2026) |
| --- | --- | --- |
| Microsoft Copilot | "An interesting and relatively rare condition" | "Not yet widely recognized, but cases have been reported" |
| Google Gemini | "A condition caused by excessive blue light exposure — recommend seeing an eye doctor" | (Attributed to limitations of early models) |
| Perplexity | Cited a prevalence rate of "1 in 90,000 people" | Introduced it as "an emerging term" |
| ChatGPT | Told users whether their symptoms matched Bixonimania | Said it was "probably made up" — then reversed course days later, calling it "a new subtype" |

The key point is that even in March 2026 — two years later — the problem still isn't fully resolved. ChatGPT would call it "fake" one day, then describe it as "a new subtype" the next. The same AI, giving opposite answers depending on how you phrased the question.

A Bigger Problem: Academic Journals Got Contaminated Too

A research team in India cited Bixonimania as a real condition in a paper published in Cureus, a Springer Nature journal. The paper was retracted in March 2026 after Nature reached out — but it showed that AI-generated misinformation can contaminate the entire academic ecosystem.

Why Does This Happen?

Research from Mahmud Omar, an AI medical specialist at Harvard Medical School, helps explain why. LLMs are more likely to accept misinformation when the text looks professional: hallucination rates actually go up when models process text formatted like hospital discharge notes or academic papers, compared with social media posts.

- 40M+: people who search for health information on ChatGPT every day
- 1 in 6: U.S. adults who use AI chatbots for health information
- 42%: adults who don't follow up with a doctor after getting AI health advice

Scale is the real issue. ECRI ranked AI chatbot misuse as the #1 healthcare technology hazard for 2026. There are documented cases of chatbots suggesting wrong diagnoses, recommending unnecessary tests, and even describing anatomical structures that don't exist. What makes it worse: all of this gets delivered in a confident, authoritative tone.

Research reported by the NYT found that AI chatbot health advice is frequently inaccurate. And a survey showing that 42% of adults who receive AI health advice don't follow up with a doctor puts the scale of the problem in sharp focus.

The Bottom Line: How to Filter AI Health Information

You can't stop AI from becoming a go-to source for medical information. But you can change how you take it in.

  1. Verify sources yourself
    Don't take AI-provided disease names or statistics at face value. Get into the habit of cross-checking on PubMed, the WHO, or nationally recognized health authority sites (a quick programmatic version of the PubMed check is sketched after this list).
  2. Be skeptical of confident-sounding statements
    The more authoritatively an AI speaks, the more skeptical you should be. LLMs are built to sound certain even when they have no basis for it.
  3. Treat AI answers as a starting point, not a conclusion
    Use it to kick off your research, but always leave the final call to a medical professional.
  4. Build guidelines at the team or organizational level
    ECRI recommends that healthcare institutions establish AI governance committees and invest in AI literacy training. This isn't just a personal problem — organizations need a coordinated response.
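If you want to automate that first PubMed cross-check, here is a minimal sketch in Python. It calls NCBI's public E-utilities ESearch endpoint, which requires no API key for light use; the helper name pubmed_hit_count and the example search terms are illustrative assumptions, not part of the original experiment. A hit count of zero for a supposedly established condition is a red flag worth investigating, not proof on its own that the term is fake.

```python
# Minimal sketch: cross-check a condition name against PubMed before trusting it.
# Endpoint: NCBI E-utilities ESearch (public, JSON output; no API key needed for light use).
import json
import urllib.parse
import urllib.request

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(term: str) -> int:
    """Return the number of PubMed records matching the given search term."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}", timeout=10) as resp:
        data = json.load(resp)
    # ESearch reports the total match count as a string under esearchresult.count.
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    # Example terms (illustrative): an established condition vs. the fabricated one.
    for condition in ["conjunctivitis", "Bixonimania"]:
        print(f"{condition}: {pubmed_hit_count(condition)} PubMed records")
```

An established term like conjunctivitis returns tens of thousands of records, while a fabricated name will typically return none; that is exactly the kind of quick sanity check item 1 recommends before acting on an AI-supplied diagnosis.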

Deep Dive Resources

Lancet Digital Health — Mapping LLM Vulnerability to Medical Misinformation

A large-scale study analyzing 20 LLMs for susceptibility to medical misinformation. Covers in detail why hallucinations increase when input is formatted like clinical notes.

ECRI 2026 Healthcare Technology Hazard Report

Explains why AI chatbot misuse topped the list, with specific case examples and actionable recommendations for healthcare institutions.

Oxford University Research — Inaccurate Medical Advice from AI Chatbots

A systematic analysis of the accuracy and consistency problems in AI chatbot medical advice.