MARCH: Multi-Agent Radiology Clinical Hierarchy for CT Report Generation

arXiv: 2604.16175v1
Authors

Yi Lin, Yihao Ding, Yonghui Wu, Yifan Peng

Abstract

Automated 3D radiology report generation often suffers from clinical hallucinations and a lack of the iterative verification found in human practice. While recent Vision-Language Models (VLMs) have advanced the field, they typically operate as monolithic "black-box" systems without the collaborative oversight characteristic of clinical workflows. To address these challenges, we propose MARCH (Multi-Agent Radiology Clinical Hierarchy), a multi-agent framework that emulates the professional hierarchy of radiology departments and assigns specialized roles to distinct agents. MARCH utilizes a Resident Agent for initial drafting with multi-scale CT feature extraction, multiple Fellow Agents for retrieval-augmented revision, and an Attending Agent that orchestrates an iterative, stance-based consensus discourse to resolve diagnostic discrepancies. On the RadGenome-ChestCT dataset, MARCH significantly outperforms state-of-the-art baselines in both clinical fidelity and linguistic accuracy. Our work demonstrates that modeling human-like organizational structures enhances the reliability of AI in high-stakes medical domains.

Paper Summary

Problem
Medical imaging, particularly 3D volumetric data like chest Computed Tomography (CT), is a cornerstone of modern diagnostic medicine. However, generating accurate, comprehensive, and clinically valid radiology reports remains a significant challenge. Current automated report generation systems often suffer from clinical hallucinations and lack the iterative verification and cross-checking found in human practice.
Key Innovation
The researchers propose MARCH (Multi-Agent Radiology Clinical Hierarchy), a multi-agent framework that emulates the professional hierarchy of radiology departments. MARCH assigns specialized roles to distinct agents, including a Resident Agent for initial drafting, multiple Fellow Agents for retrieval-augmented revision, and an Attending Agent that orchestrates an iterative, stance-based consensus discourse to resolve diagnostic discrepancies.
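The draft–revise–consensus flow described above can be sketched as a minimal orchestration loop. This is an illustrative outline only: the class names, the `Review` stance structure, and the merge rule are hypothetical stand-ins, not the authors' implementation, and the agents' actual LLM/VLM reasoning is replaced with placeholder string logic.

```python
# Hypothetical sketch of a MARCH-style hierarchy: a Resident drafts,
# Fellows issue stance-based reviews, and an Attending iterates toward
# consensus. All names and the merge rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Review:
    stance: str   # "agree" or "revise"
    report: str   # the reviewer's proposed report text

class ResidentAgent:
    def draft(self, ct_features: list[str]) -> str:
        # Placeholder for the paper's multi-scale CT feature extraction.
        return "Findings: " + "; ".join(ct_features)

class FellowAgent:
    def __init__(self, retrieved_note: str):
        # Stand-in for retrieval-augmented context (e.g. a similar prior case).
        self.retrieved_note = retrieved_note

    def review(self, report: str) -> Review:
        # Placeholder revision rule: request the retrieved finding be included.
        if self.retrieved_note in report:
            return Review("agree", report)
        return Review("revise", report + " " + self.retrieved_note)

class AttendingAgent:
    def consensus(self, report: str, fellows: list["FellowAgent"],
                  max_rounds: int = 3) -> str:
        # Iterate until every Fellow agrees or the round budget runs out.
        for _ in range(max_rounds):
            reviews = [f.review(report) for f in fellows]
            if all(r.stance == "agree" for r in reviews):
                return report
            # Toy merge rule: adopt the first dissenting revision each round.
            report = next(r.report for r in reviews if r.stance == "revise")
        return report

fellows = [FellowAgent("No pleural effusion."), FellowAgent("Mild cardiomegaly.")]
draft = ResidentAgent().draft(["ground-glass opacity in the right lower lobe"])
final = AttendingAgent().consensus(draft, fellows)
print(final)
```

The point of the loop is structural: disagreements surface as explicit "revise" stances that the Attending must resolve before a report is finalized, mirroring the cross-checking the paper argues monolithic VLMs lack.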
Practical Impact
MARCH has the potential to significantly improve the accuracy and reliability of AI in high-stakes medical domains. By modeling human-like organizational structures, MARCH can reduce cognitive errors in interpreting abnormal CT findings and generate reports that reduce the risk of single-reader misinterpretation. This can lead to better patient outcomes and more efficient clinical workflows.
Analogy / Intuitive Explanation
Imagine a team of radiologists working together to interpret a CT scan. The Resident Agent is like the junior radiologist who drafts the initial report. The Fellow Agents are like the more experienced radiologists who review and revise the report, providing additional insights and perspectives. The Attending Agent is like the team leader who oversees the entire process, ensuring that the final report is accurate and comprehensive. By working together, the team (or agents) can produce a more accurate and reliable report than any individual radiologist could alone.
Paper Information
Categories: cs.AI, cs.CV
Published Date:
arXiv ID: 2604.16175v1