Planning as Descent: Goal-Conditioned Latent Trajectory Synthesis in Learned Energy Landscapes

Published on arXiv: 2512.17846v1
Authors

Carlos Vélez García, Miguel Cazorla, Jorge Pomares

Abstract

We present Planning as Descent (PaD), a framework for offline goal-conditioned reinforcement learning that grounds trajectory synthesis in verification. Instead of learning a policy or explicit planner, PaD learns a goal-conditioned energy function over entire latent trajectories, assigning low energy to feasible, goal-consistent futures. Planning is realized as gradient-based refinement in this energy landscape, using identical computation during training and inference to reduce train-test mismatch common in decoupled modeling pipelines. PaD is trained via self-supervised hindsight goal relabeling, shaping the energy landscape around the planning dynamics. At inference, multiple trajectory candidates are refined under different temporal hypotheses, and low-energy plans balancing feasibility and efficiency are selected. We evaluate PaD on OGBench cube manipulation tasks. When trained on narrow expert demonstrations, PaD achieves state-of-the-art 95% success, strongly outperforming prior methods that peak at 68%. Remarkably, training on noisy, suboptimal data further improves success and plan efficiency, highlighting the benefits of verification-driven planning. Our results suggest learning to evaluate and refine trajectories provides a robust alternative to direct policy learning for offline, reward-free planning.
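The abstract's training recipe, self-supervised hindsight goal relabeling, can be illustrated with a short sketch. The snippet below shows a minimal, hypothetical way to build relabeled (trajectory segment, goal) pairs from offline data; the latent encoding, segment length, and pair format are assumptions on our part, and the actual energy-shaping objective is defined in the paper.

```python
import random

def hindsight_relabel(trajectory, max_len=64):
    """Build one goal-relabeled training pair from an offline trajectory.

    trajectory: list of latent states produced by some encoder (assumed).
    Returns (segment, goal), where the goal is a state the segment actually
    reaches, so the pair is feasible and goal-consistent by construction.
    """
    start = random.randrange(len(trajectory) - 1)            # needs len >= 2
    end = min(len(trajectory) - 1, start + random.randint(1, max_len))
    segment = trajectory[start:end + 1]   # candidate latent plan
    goal = trajectory[end]                # hindsight goal: state actually reached
    return segment, goal
```

Pairs like these are the ones the learned energy function should score as low energy; how exactly the landscape is shaped around the planning dynamics is the paper's contribution, not this sketch.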

Paper Summary

Problem
This paper addresses offline goal-conditioned reinforcement learning: an agent must infer how to reach a user-specified goal purely from heterogeneous demonstrations, without online exploration or reward signals. This is particularly difficult in real-world domains such as robotics, where interaction is expensive, unsafe, or impractical, and the available data consist mostly of offline, reward-free trajectories collected under unknown and potentially suboptimal policies.
Key Innovation
The key innovation of this work is the development of a framework called Planning as Descent (PaD), which grounds trajectory synthesis in verification. Instead of learning a policy or explicit planner, PaD learns a goal-conditioned energy function over entire latent trajectories, assigning low energy to feasible, goal-consistent futures. Planning is realized as gradient-based refinement in this energy landscape, using identical computation during training and inference to reduce train-test mismatch.
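As a rough illustration of "planning as descent", the sketch below refines latent trajectory candidates by gradient descent on a learned goal-conditioned energy function, one candidate per temporal hypothesis (horizon), and keeps the lowest-energy plan. The energy network, step size, horizon set, and selection rule are placeholders chosen for clarity, not the paper's exact implementation.

```python
import torch

def plan_by_descent(energy_fn, goal, horizons=(16, 32, 64),
                    latent_dim=32, steps=100, lr=0.1):
    """Gradient-based refinement of latent trajectories (illustrative only).

    energy_fn(traj, goal) -> scalar energy for one candidate; low energy
    should mean a feasible, goal-consistent future.
    """
    candidates = []
    for h in horizons:
        traj = torch.randn(h, latent_dim, requires_grad=True)  # initial guess
        opt = torch.optim.SGD([traj], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            energy = energy_fn(traj, goal)
            energy.backward()            # descend the learned energy landscape
            opt.step()
        candidates.append((energy_fn(traj, goal).item(), traj.detach()))
    return min(candidates, key=lambda c: c[0])[1]  # lowest-energy plan
```

Because the same descent computation is what training shapes the landscape around, this inference loop is not an afterthought bolted onto a separately trained model, which is the train-test consistency the paper emphasizes.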
Practical Impact
This research has practical implications for offline goal-conditioned reinforcement learning, particularly in robotics and other real-world domains. PaD provides a robust alternative to policy- and sampling-based approaches, achieving strong performance on challenging OGBench single-cube manipulation tasks, including state-of-the-art results when trained on narrow expert demonstrations. Moreover, training on diverse but highly suboptimal data can further improve both success rates and planning efficiency.
Analogy / Intuitive Explanation
Imagine planning as climbing a mountain. In traditional approaches, the agent learns a navigator (a policy or explicit planner) that directly prescribes the route to the summit (the goal). In contrast, PaD learns the terrain itself: an energy landscape that assigns low energy to feasible, goal-consistent paths and high energy to incompatible ones. Planning then arises implicitly as gradient descent in this learned terrain, iteratively refining candidate trajectories to lower their energy. The authors argue this provides a more principled and scalable foundation for offline goal-conditioned planning. A toy example follows below.
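To make the terrain picture concrete, here is a purely illustrative 1-D "landscape" whose minimum sits at the goal; a few descent steps move an arbitrary initial guess toward it. None of this reflects the paper's actual architecture, only the intuition that planning is rolling downhill in a learned energy.

```python
import torch

goal = torch.tensor(3.0)
energy = lambda x: (x - goal) ** 2          # toy landscape: minimum at the goal

x = torch.tensor(-2.0, requires_grad=True)  # arbitrary initial "plan"
for _ in range(50):
    e = energy(x)
    e.backward()
    with torch.no_grad():
        x -= 0.1 * x.grad                   # one descent step down the terrain
    x.grad.zero_()

print(x.item())  # close to 3.0: descent has settled into the goal's valley
```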
Paper Information
Categories: cs.RO, cs.AI
arXiv ID: 2512.17846v1
