
Bixonimania

The phenomenon of bixonimania does not exist, but articles and blog posts about this made-up disease, written by a medical researcher, fooled several conversational AI systems and even made their way into an official scientific article, raising concerns about data verification in the era of AI.

Bixonimania was created by Swedish researchers to deceive large language models. This imaginary disease was presented in two fake studies available on the Preprints.org website until April 10, 2026. It was quickly picked up by conversational AI systems as if it were part of medical textbooks, and even made its way into real scientific publications.

Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, first introduced bixonimania online on March 15, 2024, through two blog posts and two publications signed by a fictitious scientist. The goal was to test how easily AI systems absorb false data presented in the style of scientific publications.

To avoid confusion, the researcher placed obvious hints in the published texts, such as fictional institutions, pop culture references, and explicit statements declaring the articles invented. The publications clearly stated: “This article is entirely invented” and “fifty fictitious individuals aged 20 to 50 were recruited for the exposure group.”

The spread of bixonimania among major chatbots was almost immediate. By April 2024, Copilot, Gemini, Perplexity, and ChatGPT were citing this false condition as real, attributing it to blue light and screen exposure frequency, and even offering clinical recommendations. Some models eventually questioned the authenticity of the disease, while others continued to perpetuate the misinformation.

The issue even reached official medical literature when a study published at the end of 2024 in the journal Cureus referenced one of the falsified publications, identifying the fictitious ailment as an emerging form of periorbital melanosis related to blue light. Following alerts, that study was retracted on March 30, 2026.

Facing an ethical dilemma, Almira Osmanovic Thunström consulted colleagues before deciding to retract the publications on April 10, 2026. Some researchers criticized her for potentially spreading misinformation. The underlying moral of the story: think twice before believing everything AI presents.