
AI Doctors Learn Fake Diseases from Made-Up Research

University of Gothenburg, Sweden
Tuesday, April 14, 2026

The AI Experiment That Fooled the World (And Real Scientists)

In 2024, a team of researchers in Sweden set out to answer a chilling question: Can AI chatbots tell real science from utter nonsense?

Their method? A fabrication so bold it bordered on satire—a fictional eye disease called "Bixonimania."

The plan was simple: invent a fake medical condition, write two completely bogus research papers, and upload them to a public database. The papers sported deliberate red flags—a nonexistent author, phony citations, even a reference to Starfleet Academy. Yet within weeks, the joke took on a life of its own.


The AI’s Unnerving Response

Major AI systems, including Microsoft Bing’s Copilot and Google’s Gemini, began treating Bixonimania as a legitimate diagnosis. With eerie confidence, they regurgitated "facts" about this imaginary disease, even advising users to consult an ophthalmologist. What started as a prank had morphed into a dangerous game of telephone—where AI, hungry for data, transformed fiction into supposed truth.

The experiment revealed a terrifying reality: AI doesn't just repeat information; it spreads misinformation with alarming efficiency.


When Jokes Become Reality: The Peer-Reviewed Disaster

But the chaos didn’t stop there.

In a surreal twist, the fake research papers were cited in a real, peer-reviewed journal before the truth came to light. A team of Indian researchers unknowingly included the Bixonimania studies in their own work, which was later retracted in a humiliating correction. The damage? Done.

This wasn’t just a machine being fooled—it was scientists, too.

---

The Bigger Threat: AI as a Dangerous Doctor

The implications are staggering.

Millions turn to AI for health advice daily, trusting its responses with life-altering decisions. Yet studies have shown chatbots misdiagnosing conditions, recommending unnecessary tests, and even inventing non-existent body parts. The convenience of instant answers comes with a hidden cost: potentially fatal inaccuracies.

When real medical care is expensive and inaccessible, patients increasingly rely on AI. But what happens when the machine gets it wrong?

---

A Feedback Loop of Falsehoods

What began as a clever experiment exposed a critical flaw in AI systems: they learn from flawed data, and that flaw spreads at light speed.

Once AI-generated nonsense enters the system, it doesn’t just vanish. It gets cited, republished, and rehashed, looping back into real research. The joke isn’t funny when lives are at stake.

The question remains: Can we trust AI with our health when it can’t even tell a hoax from a fact?

