
ChatGPT’s citation errors exposed in alarming new research study

What if your research citations were wrong nearly 60% of the time? A shocking study uncovers how ChatGPT’s flaws could mislead scholars—and why experts demand urgent fixes.

A recent study has exposed serious issues with ChatGPT’s citation accuracy. Of 176 references checked, the AI tool produced only 77 fully correct citations, fewer than half. The findings raise concerns about relying on generative AI for academic research.

Prof. Dr. Theodor Ickler led the investigation into ChatGPT’s citation errors. His team discovered that 35 references (19.9%) were entirely made up. A further 141 (45.4%) contained mistakes, such as wrong publication dates, incorrect page numbers, or invalid digital object identifiers (DOIs).

Only 43.8% of the citations were fully accurate. The study highlights that the AI fabricates or distorts information more than half the time. Researchers now stress the need for stricter checks when using AI in scholarly work.

The report calls for better prompt design, thorough human verification, and stronger safeguards from journals and institutions. These steps aim to prevent flawed citations from undermining research quality.

The results show that ChatGPT’s citation errors occur frequently. With nearly one in five references invented and many more containing inaccuracies, the risks for researchers are clear. The study underlines the importance of manual checks and stricter controls to maintain academic standards.
