Large language models (LLMs) can generate plausible but inaccurate responses, so researchers have developed uncertainty quantification methods to assess the reliability of their predictions. One popular ...
Abstract: Multimodal representation learning faces two key challenges: achieving inter-modal alignment and overcoming the inability of any single method to fully exploit cross-modal semantic information. To address these, this paper ...