This article discusses the application, evaluation, and limitations of Large Language Models (LLMs) in Natural Language Inference (NLI), focusing on ethical NLI and on how neuro-symbolic techniques can improve the logical validity and alignment of ethical explanations produced by LLMs. The authors present Logic-Explainer, an abductive-deductive framework that integrates LLMs with an external backward-chaining solver to refine step-wise natural language explanations. The results suggest that neuro-symbolic methods are effective for multi-step NLI, with the potential to enhance the logical consistency, reliability, and alignment of LLMs.
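To give a flavor of the symbolic component, the sketch below shows a minimal backward-chaining prover over Horn-style rules. This is an illustrative assumption, not the paper's actual solver: the rule names, the `prove` function, and the toy ethical predicates are all hypothetical.

```python
# Minimal backward-chaining sketch (illustrative only; not the
# Logic-Explainer implementation). A goal holds if some rule body
# for it can be fully proven; facts are rules with empty bodies.
# All predicates below are hypothetical examples.

RULES = {
    # head: list of alternative bodies (each body is a list of subgoals)
    "ethically_permissible(tell_truth)": [
        ["promotes(tell_truth, honesty)", "virtue(honesty)"]
    ],
    "promotes(tell_truth, honesty)": [[]],  # fact
    "virtue(honesty)": [[]],                # fact
}

def prove(goal, rules, depth=0, trace=None):
    """Try to prove `goal` by backward chaining, recording proof steps."""
    if trace is None:
        trace = []
    for body in rules.get(goal, []):
        if all(prove(sub, rules, depth + 1, trace) for sub in body):
            trace.append((depth, goal))
            return True
    return False

trace = []
ok = prove("ethically_permissible(tell_truth)", RULES, trace=trace)
```

In a Logic-Explainer-style loop, a solver of this kind would check whether an LLM-generated explanation actually entails the conclusion, with failed proofs prompting the LLM to revise or add premises.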


Publication date: 2 Feb 2024
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2402.00745