LLM-Guided Belief-Space Planning

Enhancing TAMP with LLM exploration in partially observable environments

Princeton PRPL Lab | September 2025 - Present | Advisor: Professor Tom Silver

Overview

This ongoing research project explores how large language models (LLMs) can guide exploration in belief-space task and motion planning (TAMP) for partially observable environments. The key insight is to leverage LLMs’ common-sense reasoning to generate diverse, uncertainty-aware plans.
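In belief-space planning, the robot maintains a probability distribution over hidden states and updates it after every action and observation. As a point of reference (this is standard POMDP filtering, not the project's specific method), a minimal discrete Bayesian belief update looks like the following; the names `update_belief`, `transition`, and `observation_model` are illustrative:

```python
def update_belief(belief, action, observation, transition, observation_model):
    """One Bayesian filter step over a finite state set.

    belief: dict mapping state -> probability
    transition(s, a): dict mapping next state -> probability
    observation_model(s2, a): dict mapping observation -> probability

    Computes b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s).
    """
    new_belief = {}
    for s2 in belief:
        # Predict: push the belief through the transition model
        predicted = sum(belief[s] * transition(s, action).get(s2, 0.0)
                        for s in belief)
        # Correct: weight by the likelihood of the received observation
        new_belief[s2] = observation_model(s2, action).get(observation, 0.0) * predicted
    total = sum(new_belief.values())
    if total == 0.0:
        raise ValueError("Observation has zero probability under this belief")
    # Normalize so the posterior sums to one
    return {s: p / total for s, p in new_belief.items()}
```

For example, with a 50/50 prior over an object being in drawer A or B, a noisy "look in A" action that reports `"seen"` shifts the posterior sharply toward A while retaining some probability mass on B, which is exactly the kind of residual uncertainty the planner must reason about.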

Key Contributions

  • LLM-based Policy Generation: Developing methods that use few-shot learning from trajectory data to generate diverse belief-space policies that account for uncertainty and partial observability.

  • Closed-Loop Policy Synthesis: Exploring how to synthesize policies that balance information-gathering actions (to reduce uncertainty) with goal-directed actions (to accomplish tasks).

  • Common-Sense Reasoning Integration: Investigating how LLMs’ world knowledge can inform decision-making in scenarios where classical planning approaches struggle with the exploration-exploitation tradeoff.
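The tradeoff in the second bullet, between information-gathering and goal-directed actions, can be made concrete with a simple scoring rule: rank each candidate action by a weighted blend of its expected information gain (expected entropy reduction of the belief) and its expected goal progress. The sketch below is a minimal illustration of that idea under stated assumptions (sensing actions do not change the state; `choose_action`, `goal_value`, and the weight `alpha` are hypothetical names, not the project's API):

```python
import math

def entropy(belief):
    """Shannon entropy of a dict mapping state -> probability."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0.0)

def expected_info_gain(belief, obs_probs):
    """Expected entropy reduction for a state-preserving sensing action.

    obs_probs(s): dict mapping observation -> probability given state s.
    """
    observations = {o for s in belief for o in obs_probs(s)}
    expected_h = 0.0
    for o in observations:
        # Marginal probability of seeing o, and the posterior belief if we do
        p_o = sum(belief[s] * obs_probs(s).get(o, 0.0) for s in belief)
        if p_o == 0.0:
            continue
        posterior = {s: belief[s] * obs_probs(s).get(o, 0.0) / p_o
                     for s in belief}
        expected_h += p_o * entropy(posterior)
    return entropy(belief) - expected_h

def choose_action(belief, actions, obs_model, goal_value, alpha=0.5):
    """Pick the action maximizing a blend of info gain and goal progress.

    alpha trades off exploration (sensing) against exploitation (acting).
    """
    def score(a):
        return (alpha * expected_info_gain(belief, lambda s: obs_model(s, a))
                + (1.0 - alpha) * goal_value(belief, a))
    return max(actions, key=score)
```

Under this rule, a highly uncertain belief favors a sensing action such as looking inside a drawer, while a confident belief favors acting directly on the goal; an LLM-guided planner can play the complementary role of proposing which candidate actions are worth scoring at all.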

Impact

This work aims to enable robots to operate more effectively in real-world scenarios where complete observability is impossible, such as searching for objects in cluttered environments or manipulating items with occluded properties.