Northwestern University

04/09/2026 | Press release | Distributed by Public on 04/08/2026 19:32

AI outperforms doctors at summarizing complex cancer pathology reports

Many open-source AI models generated more complete summaries, especially for molecular findings

Media Information

  • Release Date: April 9, 2026

Media Contacts

Ben Schamisso

Journal: JCO Clinical Cancer Informatics

  • Study analyzed real-world lung cancer pathology reports from de-identified patients
  • Six open-source AI models were tested, including systems developed by Meta and Google
  • Author: 'This could help physicians focus more on patient care'

CHICAGO --- AI models can generate more complete summaries of complex cancer pathology reports than physicians, according to a new Northwestern Medicine study that tested six models developed by Meta, Google, DeepSeek and Mistral AI.

The study was published on April 8 in JCO Clinical Cancer Informatics, a journal from the American Society of Clinical Oncology.

The findings offer a potential fix to a growing challenge in oncology. As biomarker testing expands, and patients live longer, pathology reports have become increasingly detailed and longitudinal, often spanning multiple institutions and requiring clinicians to synthesize large volumes of information under significant time pressure.

In this study, several open-source AI models consistently produced summaries that were more comprehensive than physician-written versions, particularly in capturing molecular and genetic findings that are critical for treatment decisions.

"As cancer care becomes increasingly complex, the burden of synthesizing complex reports is growing rapidly," said senior study author Dr. Mohamed Abazeed, chair and professor of radiation oncology at Northwestern University Feinberg School of Medicine. "What we're seeing is that AI can help ensure critical pathological and genomic details are consistently captured - not as a replacement for physicians, but as a tool to augment clinical decision-making."

How the study was conducted

The Northwestern investigators analyzed 94 de-identified pathology reports from lung cancer patients. These reports included detailed text describing:

  • Histopathological findings (microscopic tumor characteristics)
  • Immunohistochemical results (protein expression testing)
  • Molecular and genetic data relevant to treatment

The AI models analyzed the text content of these reports and generated structured summaries.
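The study does not publish its prompts, so as an illustration only, here is a minimal sketch of how a de-identified report might be wrapped in a structured-summary request before being sent to a locally hosted model. The section headings mirror the report elements listed above; the function name and prompt wording are assumptions, not the authors' method.

```python
# Illustrative sketch only: the study's actual prompts are not published.
# Section headings mirror the report elements described in the article.

SECTIONS = [
    "Histopathological findings",
    "Immunohistochemical results",
    "Molecular and genetic findings",
]

def build_summary_prompt(report_text: str) -> str:
    """Wrap a de-identified pathology report in a structured-summary request."""
    headings = "\n".join(f"- {s}" for s in SECTIONS)
    return (
        "You are assisting an oncology team. Summarize the pathology report "
        "below into a concise structured summary with exactly these sections:\n"
        f"{headings}\n"
        "Include every molecular or genomic finding verbatim; do not invent "
        "findings that are not present in the report.\n\n"
        f"Report:\n{report_text}"
    )

if __name__ == "__main__":
    demo_report = "Adenocarcinoma of the lung. EGFR exon 19 deletion detected."
    print(build_summary_prompt(demo_report))
```

The resulting prompt string could then be passed to any of the locally run open-source models the study evaluated; the key design point the article highlights is forcing a fixed section structure so that molecular findings are consistently captured.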

The AI-generated summaries were compared to clinical summaries previously written by physicians. A panel of oncologists assessed each summary for accuracy, completeness, conciseness and potential clinical risk. Across models, AI-generated summaries were consistently rated as more complete, with the largest differences observed in the inclusion of molecular and genomic findings.

"If AI can reliably synthesize these reports, clinicians can review key findings more efficiently, important genetic details are less likely to be overlooked and documentation becomes more standardized," said study co-author Troy Teo, instructor of radiation oncology at Feinberg. "This could help physicians focus more on patient care."

Llama 3.1 and DeepSeek performed best

The scientists evaluated six open-source language models: Meta's Llama 3.0, 3.1 and 3.2 models, Google's Gemma 9B, Mistral 7.2B and DeepSeek-R1. These are not chatbots like ChatGPT, but systems that researchers can download and run locally. According to the study, the strongest performers were DeepSeek and Llama 3.1.

The Northwestern team is now developing an app using Llama 3.1 to eventually allow physicians to upload pathology reports and receive AI-generated summaries for their review. But the study authors emphasize that before deploying the app, they need more testing and validation studies.

AI as a second-opinion tool

The authors said they envision AI as a support layer that enhances, rather than replaces, clinical expertise. It could help highlight key findings, identify missing information and improve consistency in documentation.

"Patients with complex cancers might benefit the most," said study first author Dr. Yirong Liu, a fifth-year resident in radiation oncology at McGaw Medical Center of Northwestern. "In cases where missing a key pathological finding or an actionable genetic marker could change treatment decisions, ensuring that information is consistently captured is critical."

"Patients are living longer and undergoing repeated biopsies and genetic sequencing," Liu added. "Their reports can span dozens of pages. Even a single missed detail can impact care, and this is where AI may provide meaningful support."

The study is titled "Toward Automating the Summarization of Cancer Pathology Reports Using Large Language Models to Improve Clinical Usability." Troy Teo received funding from the Canadian Institutes of Health Research (grant CIHR-472392) and from Amazon Web Services' Social Impact funding.

Multimedia Downloads

Images

Please credit all photos to Northwestern University

Study senior author Dr. Mohamed Abazeed demonstrates a prototype AI tool that summarizes cancer pathology reports, shown here in a radiation oncology setting. The tool, developed at Northwestern Medicine, is not yet in clinical use and is undergoing further testing.
Study authors Drs. Mohamed Abazeed (right), Yirong Liu and Troy Teo (left) demonstrate a prototype AI tool that summarizes cancer pathology reports, shown here in a radiation oncology setting.
A prototype AI tool that summarizes cancer pathology reports. The tool, developed at Northwestern Medicine, is not yet in clinical use and is undergoing further testing.
Study authors Drs. Mohamed Abazeed (left), Yirong Liu and Troy Teo (right) test a prototype AI tool that summarizes cancer pathology reports, shown here in a radiation oncology setting.
B-Roll
Northwestern University published this content on April 09, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on April 09, 2026 at 01:32 UTC. If you believe the information included in the content is inaccurate or outdated and requires editing or removal, please contact us at [email protected]