Referring Video Object Segmentation via Language-aligned Track Selection

¹KAIST, ²Korea University
Corresponding Author; * Indicates Equal Contribution
arXiv 2025

SOLA: Selection by Object Language Alignment

SOLA leverages SAM2 object tokens as compact video-level representations, aligning them with language features via a lightweight track selection module and an IoU-based pseudo-labeling strategy, achieving state-of-the-art performance on MeViS.


Example referring expressions from the demo videos:
"The two vehicles parked on the side of the road"
"moving from right to left"
"It enthusiastically chases and pounces on the wand"
"baby tiger without moving position"

Abstract

Referring video object segmentation (RVOS) requires tracking and segmenting an object throughout a video according to a given natural language expression, demanding both complex motion understanding and the alignment of visual representations with language descriptions. Given these challenges, the recently proposed Segment Anything Model 2 (SAM2) emerges as a promising candidate due to its ability to generate coherent segmentation mask tracks across video frames and to provide inherent spatio-temporal objectness in its object token representations. In this paper, we introduce SOLA (Selection by Object Language Alignment), a novel framework that leverages SAM2 object tokens as compact video-level object representations, which are aligned with language features through a lightweight track selection module. To effectively facilitate this alignment, we propose an IoU-based pseudo-labeling strategy, which bridges the modality gap between SAM2 representations and language features. Extensive experiments show that SOLA achieves state-of-the-art performance on the MeViS dataset, demonstrating that it offers an effective solution for RVOS.
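To make the IoU-based pseudo-labeling strategy concrete, the sketch below assigns a positive label to every SAM2 candidate mask track whose video-level IoU with the annotated target mask exceeds a threshold. The tensor shapes, the helper names (track_iou, pseudo_labels), and the 0.5 threshold are illustrative assumptions rather than the exact recipe used in the paper.

import torch

def track_iou(candidates: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    # candidates: (N, T, H, W) binary candidate mask tracks; gt: (T, H, W) binary ground truth.
    gt = gt.unsqueeze(0).bool()                              # (1, T, H, W), broadcast over N tracks
    cand = candidates.bool()
    inter = (cand & gt).sum(dim=(1, 2, 3)).float()
    union = (cand | gt).sum(dim=(1, 2, 3)).float()
    return inter / union.clamp(min=1.0)                      # (N,) video-level IoU per track

def pseudo_labels(candidates: torch.Tensor, gt: torch.Tensor,
                  iou_threshold: float = 0.5) -> torch.Tensor:
    # Candidate tracks that overlap the referred object enough become positive targets
    # for training the track selection module; the rest are treated as negatives.
    return (track_iou(candidates, gt) >= iou_threshold).float()   # (N,) labels in {0, 1}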

Motivation

Our method effectively bridges the modality gap by aligning features obtained from fully frozen uni-modal encoders: a video segmentation model such as SAM2 and a text encoder such as RoBERTa. By directly leveraging these token representations, our approach achieves lightweight multi-modal alignment while significantly reducing the number of trainable parameters.
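As a rough sketch of what this lightweight alignment can look like, the module below projects precomputed SAM2 object tokens and a pooled RoBERTa sentence feature into a shared embedding space with two small trainable linear heads and scores each candidate track by cosine similarity. The class name TrackSelectionHead, the feature dimensions, and the pooling convention are assumptions for illustration, not the exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TrackSelectionHead(nn.Module):
    """Tiny trainable heads aligning frozen SAM2 object tokens with frozen text features."""

    def __init__(self, obj_dim: int = 256, text_dim: int = 768, embed_dim: int = 256):
        super().__init__()
        self.obj_proj = nn.Linear(obj_dim, embed_dim)    # projects per-track object tokens
        self.txt_proj = nn.Linear(text_dim, embed_dim)   # projects the pooled sentence feature

    def forward(self, obj_tokens: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # obj_tokens: (N, obj_dim) precomputed SAM2 object tokens, one per candidate track.
        # text_feat:  (text_dim,)  pooled RoBERTa feature of the referring expression.
        o = F.normalize(self.obj_proj(obj_tokens), dim=-1)   # (N, embed_dim)
        t = F.normalize(self.txt_proj(text_feat), dim=-1)    # (embed_dim,)
        return o @ t                                         # (N,) alignment scores

Both encoders stay frozen; only the two projection layers are trained.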

Architecture

Overall pipeline of the proposed SOLA framework

Our method selects the object mask tracks that match the expression from a set of candidates via a language-aligned track selection module. We first generate candidate mask tracks and corresponding object tokens from the fully frozen SAM2. These tokens are then aligned with language expressions, producing alignment scores that indicate selection probabilities. Mask tracks with scores above a predefined threshold are selected and merged into the final binary segmentation mask. By leveraging precomputed object tokens from SAM2, our approach minimizes trainable parameters, enabling efficient training on a single GPU.
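The selection-and-merge step described above could look roughly like the sketch below; interpreting the alignment scores as logits, the 0.5 threshold, the best-track fallback, and the name select_and_merge are assumptions for illustration.

import torch

def select_and_merge(mask_tracks: torch.Tensor, scores: torch.Tensor,
                     threshold: float = 0.5) -> torch.Tensor:
    # mask_tracks: (N, T, H, W) binary candidate mask tracks from the frozen SAM2.
    # scores:      (N,) alignment scores from the track selection module.
    probs = scores.sigmoid()                    # treat scores as selection probabilities
    keep = probs >= threshold                   # (N,) which candidate tracks to keep
    if not keep.any():                          # fall back to the single best-scoring track
        keep = probs == probs.max()
    merged = mask_tracks.bool()[keep].any(dim=0)    # union over the selected tracks
    return merged.float()                           # (T, H, W) final binary mask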

Qualitative Results

Quantitative Results

The best-performing results are presented in bold, while the second-best results are underlined.
Quantitative comparison on MeViS.
Zero-shot quantitative comparison on Ref-YouTube-VOS and Ref-DAVIS.
Quantitative comparison on the combined dataset (MeViS + Ref-YouTube-VOS).

BibTeX

@misc{kim2024referringvideoobjectsegmentation,
      title={Referring Video Object Segmentation via Language-aligned Track Selection}, 
      author={Seongchan Kim and Woojeong Jin and Sangbeom Lim and Heeji Yoon and Hyunwook Choi and Seungryong Kim},
      year={2024},
      eprint={2412.01136},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.01136}, 
}