VISTA: Enhancing Visual Conditioning via Track-Following Preference Optimization in VLAs

1Georgia Institute of Technology 2Nvidia 3Microsoft GenAI 4University of Oxford 5ARM
Work done during an internship at Microsoft.

Work done while at Microsoft.

Overview

  1. Key idea: We study and demonstrate the importance of visual conditioning for VLA performance.
  2. Key method: We introduce VISTA, a new training framework that improves visual conditioning in VLAs by aligning action predictions to visual tracks.
  3. Major results: VISTA enhances the visual conditioning of OpenVLA and improves the performance of both OpenVLA and OpenVLA-OFT.
Teaser figure: VISTA Overview.

Abstract

Vision–Language–Action (VLA) models have demonstrated strong performance across a wide range of robotic manipulation tasks. Despite this success, extending large pretrained Vision-Language Models (VLMs) to the action space can induce vision-action misalignment, where action predictions exhibit weak dependence on the current visual state, leading to unreliable action outputs. In this work, we study VLA models through the lens of visual conditioning and empirically show that successful rollouts consistently exhibit stronger visual dependence than failed ones. Motivated by this observation, we propose a training framework that explicitly strengthens visual conditioning in VLA models. Our approach first aligns action prediction with visual input via preference optimization on a track-following surrogate task, and then transfers the enhanced alignment to the instruction-following task through latent-space distillation during supervised fine-tuning. Without introducing architectural modifications or additional data collection, our method improves both visual conditioning and task performance for the discrete OpenVLA, and further yields consistent gains when extended to the continuous OpenVLA-OFT setting.

Visual Conditioning Study and Results

Visual conditioning result visualization

Token-level visual conditioning of 8-step OpenVLA and VISTA (Ours) in LIBERO-Spatial.
The vertical gridlines indicate that every 7 tokens decode to 1 action (56 tokens for 8 actions in total).

  • We quantify VLA token-level visual conditioning as the KL divergence between action distributions conditioned on clean and perturbed visual inputs (a minimal sketch of this metric follows this list).
  • We study the baseline autoregressive OpenVLA and show that failed rollouts exhibit consistently weaker visual conditioning than successful rollouts.
  • Our method (VISTA) enhances visual conditioning and improves task performance (see below).
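
A minimal sketch of how this token-level metric can be computed, assuming access to the action-token logits under clean and perturbed images (the function and tensor names below are illustrative, not from the released code):

import torch
import torch.nn.functional as F

def token_visual_conditioning(logits_clean: torch.Tensor,
                              logits_perturbed: torch.Tensor) -> torch.Tensor:
    # logits_*: (num_action_tokens, vocab_size) action-token logits from the same
    # step, obtained by running the VLA once with the clean image and once with a
    # perturbed image while keeping the instruction and other inputs fixed.
    log_p_clean = F.log_softmax(logits_clean, dim=-1)
    log_p_pert = F.log_softmax(logits_perturbed, dim=-1)
    # Per-token KL(p_clean || p_perturbed); larger values indicate stronger
    # dependence of the action prediction on the visual input.
    kl = (log_p_clean.exp() * (log_p_clean - log_p_pert)).sum(dim=-1)
    return kl  # shape: (num_action_tokens,), e.g. 56 tokens for an 8-step chunk

Averaging this per-token score over a rollout is one way to obtain a rollout-level conditioning score that can be compared between successful and failed rollouts, as in the study above.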

Method

Method overview figure

Illustration of the VISTA training recipe.

VISTA is a training framework consisting of three stages (minimal sketches of the Stage 1 and Stage 2 objectives follow this list):
  1. Stage 0: regular instruction-following supervised fine-tuning (SFT).
  2. Stage 1: track-following Direct Preference Optimization (DPO) for aligning vision and action.
  3. Stage 2: instruction-following SFT with latent distillation (cosine similarity by default) from the Stage 1-aligned model.
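
As a rough sketch of the Stage 1 and Stage 2 objectives, the snippet below assumes a standard DPO loss over preferred vs. dispreferred action-token sequences and a cosine-similarity distillation term on action-token latents; how preference pairs are constructed and which hidden states are matched are assumptions for illustration, not details taken from the released code:

import torch.nn.functional as F

def track_following_dpo_loss(logp_chosen, logp_rejected,
                             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Stage 1: standard DPO objective. logp_* are summed log-probabilities of the
    # preferred and dispreferred action-token sequences under the policy being
    # trained; ref_logp_* are the same quantities under the frozen SFT reference.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(margin).mean()

def latent_distillation_loss(student_hidden, teacher_hidden):
    # Stage 2: cosine-similarity distillation that pulls the student's action-token
    # latents toward those of the frozen Stage 1-aligned teacher.
    # student_hidden, teacher_hidden: (num_action_tokens, hidden_dim) tensors.
    cos = F.cosine_similarity(student_hidden, teacher_hidden.detach(), dim=-1)
    return (1.0 - cos).mean()

In Stage 2, such a distillation term would be added to the usual SFT action loss with a weighting coefficient, so that the enhanced visual conditioning from Stage 1 carries over to instruction following.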

LIBERO Experimental Results and Demos

LIBERO experimental results:
LIBERO_results
LIBERO demo videos (discrete VISTA):

CALVIN Experimental Results and Demos

CALVIN experimental results:
CALVIN_results
CALVIN demo videos (continuous VISTA-OFT):

Analysis

Visual conditioning analysis figure

Visual Conditioning Across VISTA Training Stages.

Stage 1 (track-following DPO) significantly improves visual conditioning on the surrogate track-following task.
Stage 2 (SFT with latent distillation) transfers the enhanced visual conditioning to the instruction-following task, yielding stronger conditioning than pure SFT in Stage 0.

BibTeX

@article{chen2025enhancing,
  title={Enhancing Visual Conditioning via Track-Following Preference Optimization in Vision-Language-Action Models},
  author={First Author and Second Author and Third Author},
  journal={arXiv preprint arXiv:XXXX.YYYYY},
  year={2026},
}