Empowering LLMs with Logical Reasoning: Challenges, Solutions, and Opportunities
Haoxuan Li, Fengxiang Cheng, Fenrong Liu, Robert van Rooij, Kun Zhang, Zhouchen Lin
https://sites.google.com/view/ijcai25-tutorial-logicllm

Large language models (LLMs) have achieved remarkable successes on various tasks. However, recent studies have found that LLMs still face significant challenges in logical reasoning, which can be categorized into the following two aspects: (1) Logical question answering: LLMs often fail to generate the correct answer to a complex logical problem that requires sophisticated deductive, inductive, or abductive reasoning over a collection of premises and constraints. (2) Logical consistency: LLMs are prone to producing responses that contradict themselves across different questions. For example, Macaw, a state-of-the-art question-answering LLM, answers “Yes” to both questions “Is a magpie a bird?” and “Does a bird have wings?” but answers “No” to “Does a magpie have wings?”.
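The magpie example above is an implication-consistency violation: affirming every premise while denying the entailed conclusion. As a minimal illustration (the function name, question strings, and hard-coded answer set are hypothetical, not part of any system discussed in the tutorial), such a violation can be detected mechanically:

```python
# Minimal sketch of an implication-consistency check over yes/no answers
# from a QA model. `answers` hard-codes the answers reported for Macaw
# in the example above; in practice they would come from model queries.

def consistent(answers: dict, premises: list, conclusion: str) -> bool:
    """Return False when every premise is answered 'Yes' (True) but the
    entailed conclusion is answered 'No' (False) -- a contradiction."""
    if all(answers[p] for p in premises):
        return answers[conclusion]
    return True  # rule not triggered, so no contradiction is detected

answers = {
    "Is a magpie a bird?": True,
    "Does a bird have wings?": True,
    "Does a magpie have wings?": False,
}

print(consistent(
    answers,
    premises=["Is a magpie a bird?", "Does a bird have wings?"],
    conclusion="Does a magpie have wings?",
))  # False: the three answers jointly violate the implication
```

Checks of this form generalize to the other consistency types discussed below (negation, transitivity, and their composites) by swapping in the corresponding logical rule.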
In this tutorial, we comprehensively introduce the most cutting-edge methods under a proposed new taxonomy. Specifically, for accurately answering complex logical questions, existing methods can be categorized by their reliance on external solvers, prompting, or fine-tuning. For avoiding logical contradictions, we discuss the concepts of and solutions for various kinds of logical consistency, including implication, negation, transitivity, and factuality consistency, as well as their composites. In addition, we review commonly used benchmark datasets and evaluation metrics, and discuss promising research directions, such as extending to modal logic to account for uncertainty and developing efficient algorithms that simultaneously satisfy multiple kinds of logical consistency.
Schedule: August 29th PM - Full afternoon