DecoVLN: Decoupling Observation, Reasoning, and Correction for Vision-and-Language Navigation

Generative AI & LLMs
Published on arXiv: 2603.13133v1
Authors

Zihao Xin, Wentong Li, Yixuan Jiang, Bin Wang, Runming Cong, Jie Qin, Shengjun Huang

Abstract

Vision-and-Language Navigation (VLN) requires agents to follow long-horizon instructions and navigate complex 3D environments. However, existing approaches face two major challenges: constructing an effective long-term memory bank and overcoming the compounding-errors problem. To address these issues, we propose DecoVLN, an effective framework designed for robust streaming perception and closed-loop control in long-horizon navigation. First, we formulate long-term memory construction as an optimization problem and introduce an adaptive refinement mechanism that selects frames from a historical candidate pool by iteratively optimizing a unified scoring function. This function jointly balances three key criteria: semantic relevance to the instruction, visual diversity from the selected memory, and temporal coverage of the historical trajectory. Second, to alleviate compounding errors, we introduce a corrective fine-tuning strategy at the level of state-action pairs. By leveraging geodesic distance between states to precisely quantify deviation from the expert trajectory, the agent collects high-quality state-action pairs in the trusted region while filtering out polluted, low-relevance data. This improves both the efficiency and stability of error correction. Extensive experiments demonstrate the effectiveness of DecoVLN, and we have deployed it in real-world environments.
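The unified scoring function for memory selection can be pictured as a greedy loop over a candidate pool. The following is a toy sketch, not the paper's implementation: the weights `alpha`/`beta`/`gamma`, the cosine-similarity relevance term, and all function names are assumptions introduced here for illustration.

```python
import numpy as np

def score_candidate(frame_feat, instr_feat, selected_feats, t, T,
                    selected_times, alpha=1.0, beta=1.0, gamma=1.0):
    """Toy unified score: relevance + diversity + coverage (hypothetical weights)."""
    # Semantic relevance: cosine similarity between frame and instruction features.
    rel = frame_feat @ instr_feat / (
        np.linalg.norm(frame_feat) * np.linalg.norm(instr_feat))
    # Visual diversity: dissimilarity from the most similar already-selected frame.
    if selected_feats:
        sims = [frame_feat @ f / (np.linalg.norm(frame_feat) * np.linalg.norm(f))
                for f in selected_feats]
        div = 1.0 - max(sims)
    else:
        div = 1.0
    # Temporal coverage: normalized distance to the nearest selected timestep.
    cov = min(abs(t - s) for s in selected_times) / T if selected_times else 1.0
    return alpha * rel + beta * div + gamma * cov

def select_memory(candidates, instr_feat, k):
    """Greedily pick k frames from (timestep, feature) candidates."""
    pool = list(candidates)
    T = max(t for t, _ in pool) or 1
    selected_t, feats = [], []
    for _ in range(min(k, len(pool))):
        scores = [score_candidate(f, instr_feat, feats, t, T, selected_t)
                  for t, f in pool]
        t, f = pool.pop(int(np.argmax(scores)))
        selected_t.append(t)
        feats.append(f)
    return sorted(selected_t)
```

Greedy selection is one plausible way to "iteratively optimize" such a score; the actual refinement procedure in the paper may differ.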

Paper Summary

Problem
This paper addresses Vision-and-Language Navigation (VLN), in which an autonomous agent must interpret natural language instructions and navigate complex, previously unseen environments using only egocentric visual observations. Existing approaches face two major challenges: constructing an effective long-term memory bank and overcoming the compounding-errors problem.
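The compounding-errors challenge is addressed in the abstract by keeping only state-action pairs whose states stay close (in geodesic distance) to the expert trajectory. A minimal sketch under strong simplifications: a 4-connected grid with BFS path length stands in for true geodesic distance in a 3D scene, and `max_dev` is a hypothetical threshold, not a value from the paper.

```python
from collections import deque

def geodesic_distance(grid, start, goal):
    """BFS shortest-path length on a 4-connected grid (1 = obstacle).
    Stand-in for geodesic distance on a real navigation mesh."""
    if start == goal:
        return 0
    rows, cols = len(grid), len(grid[0])
    seen, q = {start}, deque([(start, 0)])
    while q:
        (r, c), d = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                if (nr, nc) == goal:
                    return d + 1
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return float("inf")

def filter_trusted_pairs(rollout, expert_states, grid, max_dev=2):
    """Keep (state, action) pairs whose state lies within max_dev of the
    nearest expert state; drop 'polluted' pairs that drifted too far."""
    trusted = []
    for state, action in rollout:
        dev = min(geodesic_distance(grid, state, e) for e in expert_states)
        if dev <= max_dev:
            trusted.append((state, action))
    return trusted
```

Geodesic (path) distance is preferable to straight-line distance here because two states on opposite sides of a wall can be spatially close yet far apart for the agent.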
Key Innovation
The key innovation is the DecoVLN framework, which explicitly decouples observation, reasoning, and correction for long-horizon navigation. This decoupling lets the agent gather observations continuously while simultaneously executing actions and reasoning, and it achieves robust navigation in unknown environments using only egocentric RGB inputs.
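The decoupling idea can be illustrated with a toy streaming loop: an observer thread keeps publishing frames while a slower reasoning loop drains stale frames and acts on the freshest one. All names, timings, and the placeholder policy below are assumptions for illustration, not the paper's architecture.

```python
import queue
import threading
import time

def observer(obs_q, stop):
    """Streams observations continuously (stand-in for an RGB camera feed)."""
    t = 0
    while not stop.is_set():
        obs_q.put(f"frame-{t}")
        t += 1
        time.sleep(0.01)

def reason_and_act(obs_q, actions, stop, steps=5):
    """Runs slower than the observer: drains stale frames, then reasons and
    acts on the freshest one. A correction step could veto actions here."""
    for _ in range(steps):
        obs = obs_q.get()            # block until at least one frame exists
        while not obs_q.empty():     # drop stale frames: reasoning lags observing
            obs = obs_q.get_nowait()
        actions.append(("forward", obs))  # placeholder policy
        time.sleep(0.03)
    stop.set()

def run():
    obs_q, actions, stop = queue.Queue(), [], threading.Event()
    threads = [threading.Thread(target=observer, args=(obs_q, stop)),
               threading.Thread(target=reason_and_act, args=(obs_q, actions, stop))]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return actions
```

The point of the sketch is the asymmetry: observation never waits for reasoning, and reasoning always works on the latest evidence rather than a backlog.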
Practical Impact
DecoVLN can be deployed on real-world platforms such as robots or self-driving cars, enabling them to navigate complex spaces while following human instructions. Its ability to learn from experience and correct errors online makes it well suited to safety-critical applications, and its data efficiency and generalization potential make it promising for tasks where large-scale datasets are unavailable.
Analogy / Intuitive Explanation
Imagine you are lost in a new city and need to follow a set of directions to find your way. You might look at a map, ask for directions, and then try to follow the instructions. However, if you make a mistake early on, it can be difficult to get back on track. DecoVLN is like having a personal navigation assistant that can continuously gather information, reason about the instructions, and correct any mistakes it makes in real-time. This allows it to stay on course and find the destination efficiently, even in complex and unfamiliar environments.
Paper Information
Categories: cs.RO
arXiv ID: 2603.13133v1
