This paper introduces introspective planning for large language models (LLMs) in robotics. When robots rely on LLMs to interpret natural language instructions and plan actions, "hallucinations" can lead to misaligned or unsafe plans, and the inherent ambiguity of natural language adds task uncertainty. The paper proposes introspective planning as a systematic method for guiding LLMs to form uncertainty-aware plans for robotic task execution. The authors find that it significantly improves success rates and safety over other LLM-based planning approaches. They also combine introspective planning with conformal prediction, which yields tighter confidence bounds, preserves statistical guarantees on task success, and reduces unnecessary user clarification queries.
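To make the conformal-prediction step concrete, below is a minimal sketch of split conformal prediction applied to candidate-plan selection, in the spirit of the setup the paper describes (and of prior work it builds on, such as KnowNo): a confidence threshold is calibrated on held-out labeled examples, every candidate plan whose score clears the threshold joins the prediction set, and the robot asks the user for clarification only when that set contains more than one plan. The function names, scores, and plan options here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def calibrate_threshold(true_plan_confidences, alpha=0.1):
    """Split conformal calibration.

    Given the model's confidence in the *correct* plan on n held-out
    calibration examples, return a confidence threshold that the correct
    plan on a new example clears with probability >= 1 - alpha
    (assuming calibration and test data are exchangeable).
    """
    scores = 1.0 - np.asarray(true_plan_confidences)  # nonconformity scores
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    return 1.0 - qhat

def prediction_set(candidate_confidences, threshold):
    """All candidate plans whose confidence clears the calibrated threshold."""
    return [plan for plan, c in candidate_confidences.items() if c >= threshold]

# Synthetic calibration data standing in for LLM confidences on labeled tasks.
rng = np.random.default_rng(0)
tau = calibrate_threshold(rng.beta(8, 2, size=200), alpha=0.1)

# At deployment: score each candidate plan for the current instruction.
candidates = {"pick up the metal bowl": 0.86, "pick up the plastic bowl": 0.52}
plans = prediction_set(candidates, tau)
if len(plans) == 1:
    print("execute:", plans[0])                    # unambiguous -> act
else:
    print("ask the user to choose among:", plans)  # ambiguous -> clarify
```

The coverage guarantee is distribution-free, so in this scheme better-separated confidences shrink the prediction set without sacrificing the guarantee; this is roughly how tighter confidence bounds translate into fewer clarification queries.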

Publication date: 12 Feb 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2402.06529