MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models
Abstract
Artificial Intelligence (AI) has demonstrated significant potential in healthcare, particularly in disease diagnosis and treatment planning. Recent progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new possibilities for interactive diagnostic tools. However, these models often suffer from factual hallucination, which can lead to incorrect diagnoses. Fine-tuning and retrieval-augmented generation (RAG) have emerged as methods to address these issues. However, the limited amount of high-quality data and distribution shifts between training and deployment data restrict the applicability of fine-tuning. Although RAG is lightweight and effective, existing RAG-based approaches are not sufficiently general across medical domains and can introduce misalignment, both between modalities and between the model and the ground truth. In this paper, we propose a versatile multimodal RAG system, MMed-RAG, designed to enhance the factuality of Med-LVLMs. Our approach introduces a domain-aware retrieval mechanism, an adaptive retrieved-context selection method, and a provable RAG-based preference fine-tuning strategy. These innovations make the RAG process general and reliable, significantly improving alignment when retrieved contexts are introduced. Experimental results across five medical datasets (covering radiology, ophthalmology, and pathology) on medical VQA and report generation demonstrate that MMed-RAG achieves an average improvement of 43.8% in the factual accuracy of Med-LVLMs. Our data and code are available at https://github.com/richard-peng-xia/MMed-RAG.
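The abstract names three components: domain-aware retrieval, adaptive retrieved-context selection, and RAG-based preference fine-tuning. Below is a minimal sketch of how the first two retrieval-side components could fit together, assuming hypothetical helpers (`classify_domain`, `DomainRetriever`, `select_contexts`) and a simple ratio-based score cutoff; the paper's actual retrievers, similarity scores, and selection rule may differ.

```python
# Minimal sketch of the two retrieval-side components described in the abstract:
# (1) domain-aware retrieval: route the query image to a domain-specific retriever,
# (2) adaptive context selection: keep contexts until the similarity score drops
#     sharply, instead of always taking a fixed top-k.
# All names here (classify_domain, DomainRetriever, select_contexts) are
# illustrative assumptions, not the paper's actual API.

from dataclasses import dataclass
from typing import List


@dataclass
class RetrievedContext:
    text: str
    score: float  # image-text similarity assigned by the retriever


class DomainRetriever:
    """Placeholder for a domain-specific multimodal retriever (e.g., radiology)."""

    def retrieve(self, image, query: str, k: int = 20) -> List[RetrievedContext]:
        raise NotImplementedError


def classify_domain(image) -> str:
    """Assumed domain classifier returning 'radiology' | 'ophthalmology' | 'pathology'."""
    raise NotImplementedError


DOMAIN_RETRIEVERS = {
    "radiology": DomainRetriever(),
    "ophthalmology": DomainRetriever(),
    "pathology": DomainRetriever(),
}


def select_contexts(candidates: List[RetrievedContext],
                    gap_ratio: float = 0.8) -> List[RetrievedContext]:
    """Adaptive selection: stop once the score falls below gap_ratio times the
    previous score (one plausible rule; the paper's criterion may differ)."""
    candidates = sorted(candidates, key=lambda c: c.score, reverse=True)
    kept = candidates[:1]
    for prev, cur in zip(candidates, candidates[1:]):
        if cur.score < gap_ratio * prev.score:
            break
        kept.append(cur)
    return kept


def retrieve_for_query(image, query: str) -> List[RetrievedContext]:
    domain = classify_domain(image)                       # domain-aware routing
    candidates = DOMAIN_RETRIEVERS[domain].retrieve(image, query)
    return select_contexts(candidates)                    # adaptive truncation
```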
Community
We introduce MMed-RAG, a versatile multimodal RAG system that boosts the factual accuracy of Medical Large Vision-Language Models (Med-LVLMs) by an average of 43.8%!
MMed-RAG enhances alignment across medical domains such as radiology, pathology, and ophthalmology with a domain-aware retrieval mechanism, and it tackles three key alignment challenges in multimodal RAG:
1. Copying homework directly from others → thinking for itself
MMed-RAG keeps Med-LVLMs from blindly copying external information, encouraging the model to rely on its own visual reasoning when it can solve the problem itself.
2. Unable to solve the problem alone → learning how to copy
When a Med-LVLM is unsure, MMed-RAG teaches it to use retrieved knowledge intelligently, pulling in the right information at the right time to boost accuracy and reduce errors.
3. The copied homework is wrong → avoiding interference from incorrect answers
MMed-RAG prevents the model from being misled by incorrect retrievals, reducing the risk of inaccurate medical diagnoses (see the sketch below for one way preference pairs covering these three cases could be constructed).
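To make the three cases above concrete, here is a minimal sketch of how RAG-based preference pairs for them might be assembled (e.g., for a DPO-style preference objective). The helpers `answer_without_rag`, `answer_with_rag`, and `is_correct` are hypothetical placeholders, and the pairing rules are illustrative assumptions rather than the paper's exact recipe.

```python
# Illustrative sketch: building preference pairs for the three alignment cases
# above (prefer the model's own answer when retrieval misleads it; prefer the
# retrieval-grounded answer when the model alone is wrong). All helpers below
# are hypothetical placeholders, not the paper's API.

from typing import Dict, List


def answer_without_rag(image, question: str) -> str:
    """Placeholder: query the Med-LVLM without any retrieved context."""
    raise NotImplementedError


def answer_with_rag(image, question: str, context: str) -> str:
    """Placeholder: query the Med-LVLM with the retrieved context in the prompt."""
    raise NotImplementedError


def is_correct(answer: str, ground_truth: str) -> bool:
    """Placeholder: correctness check against the reference answer."""
    raise NotImplementedError


def build_preference_pairs(samples: List[Dict]) -> List[Dict]:
    """Each sample: {image, question, ground_truth, retrieved_context}."""
    pairs = []
    for s in samples:
        own = answer_without_rag(s["image"], s["question"])
        rag = answer_with_rag(s["image"], s["question"], s["retrieved_context"])
        own_ok = is_correct(own, s["ground_truth"])
        rag_ok = is_correct(rag, s["ground_truth"])

        if own_ok and not rag_ok:
            # Cases 1 & 3: the model reasons correctly on its own but is misled
            # by (or over-relies on) the retrieved context -> prefer its own answer.
            pairs.append({"question": s["question"], "context": s["retrieved_context"],
                          "chosen": own, "rejected": rag})
        elif rag_ok and not own_ok:
            # Case 2: the model fails alone but succeeds with retrieval ->
            # prefer the retrieval-grounded answer.
            pairs.append({"question": s["question"], "context": s["retrieved_context"],
                          "chosen": rag, "rejected": own})
        # Both right or both wrong: no informative preference signal from this sample.
    return pairs
```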
The results speak for themselves: an average 43.8% improvement in factual accuracy across tasks like medical VQA and report generation, making Med-LVLMs more reliable and trustworthy in critical healthcare applications!
MMed-RAG has the potential to be extended beyond healthcare, offering solutions for more general domains where factual accuracy and reliable retrieval are critical!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models (2024)
- LLaVA Needs More Knowledge: Retrieval Augmented Natural Language Generation with Knowledge Graph for Explaining Thoracic Pathologies (2024)
- ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models (2024)
- OrthoDoc: Multimodal Large Language Model for Assisting Diagnosis in Computed Tomography (2024)
- TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions (2024)