OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
The article presents OPERA, a novel decoding method for multi-modal large language models (MLLMs) that reduces hallucinations. The approach combines an Over-trust Penalty with a Retrospection-Allocation strategy.