Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding (arXiv:2311.16922, published Nov 28, 2023)
RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback (arXiv:2312.00849, published Dec 1, 2023)