A new European study has raised concerns that prolonged use of artificial intelligence in medical diagnostics may erode doctors’ skills, warning of overreliance on technology in clinical decision-making.
Published on Wednesday in The Lancet Gastroenterology & Hepatology, the research examined the performance of 19 experienced endoscopists across four Polish endoscopy centres during the Artificial Intelligence in Colonoscopy for Cancer Prevention (ACCEPT) trial. The trial, funded by the European Commission and the Japan Society for the Promotion of Science, introduced AI tools in late 2021 to help detect polyps—abnormal growths that can be benign or cancerous—during colonoscopies.
Researchers compared 1,443 non-AI-assisted colonoscopies conducted between September 2021 and March 2022, split into two groups: those performed in the three months before AI was introduced and those performed in the three months after. While AI assistance improved polyp detection when used, clinicians' performance dropped when they worked without it after regular exposure.
Before AI implementation, the adenoma detection rate (ADR)—the proportion of procedures finding at least one precancerous adenoma—stood at around 28%. Three months after AI was introduced, the ADR for non-AI-assisted procedures dropped to 22%. Higher ADRs are associated with lower colorectal cancer risk.
Researchers suggested the decline reflected “the natural human tendency to over-rely” on automated systems, likening it to becoming dependent on GPS navigation and losing the ability to read a traditional map. Co-author Marcin Romańczyk of the Medical University of Silesia referred to this as the “Google Maps effect.”
Omer Ahmad, a consultant gastroenterologist at University College Hospital London, wrote in an accompanying editorial that regular AI use may weaken visual search habits and pattern recognition skills, which are essential for polyp detection. Reduced diagnostic confidence and less skilled manipulation of the colonoscope could also result, he added.
Catherine Menon of the University of Hertfordshire noted that while previous research had identified de-skilling as a theoretical risk, this study offers some of the first real-world data pointing to its occurrence in medical diagnostics.
However, some experts cautioned against drawing definitive conclusions from a single study. Venet Osmani of Queen Mary University of London suggested clinician fatigue from increased workloads could also explain the drop in detection rates. Allan Tucker of Brunel University observed that performance overall improved with AI assistance, noting that automation bias is a risk with any new technology, not just AI.
The findings highlight a growing challenge in modern healthcare: how to integrate advanced tools without diminishing essential human expertise. As Ahmad put it, the debate is not only about monitoring technology but about “navigating the complexities of a new human-AI clinical ecosystem” and ensuring safeguards to preserve critical skills.