We introduce Interaction-aware Referring Video Object Segmentation, a new task that
requires segmenting both actor and target entities involved in an interaction.
Referring video object segmentation aims to segment the object in a video that corresponds to a given natural language expression. While prior work has explored various referring scenarios, including motion-centric and multi-instance expressions, most approaches still localize a single target object in isolation. In comprehensive video understanding, however, an object's role is often defined by its interactions with other entities, which existing datasets and models largely overlook. In this work, we introduce InterRVOS, a new task that requires segmenting both the actor and target entities involved in an interaction. Each interaction is described through a pair of complementary expressions from different semantic perspectives, enabling fine-grained modeling of inter-object relationships. To support this task, we propose InterRVOS-8K, a large-scale, automatically constructed dataset containing diverse interaction-aware expressions, including challenging cases such as motion-only multi-instance expressions. We also present a baseline architecture, ReVIOSa, designed to segment both actor and target from a single expression, achieving strong performance in both standard and interaction-focused settings. Furthermore, we introduce an actor-target-aware evaluation setting that enables a more targeted assessment of interaction understanding. Experimental results demonstrate that our approach outperforms prior methods at modeling complex object interactions for referring video object segmentation, establishing a strong foundation for future research in interaction-centric video understanding.
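To make the actor-target-aware evaluation concrete, below is a minimal sketch of how per-role scoring could work, assuming the standard region-similarity metric J (mask IoU) used in referring video object segmentation. The function names and the simple averaging over roles are our assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity J: IoU between binary mask arrays of the
    same shape, e.g. (T, H, W) mask tracks over a video."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: treat as a perfect match
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

def actor_target_score(pred_actor, gt_actor, pred_target, gt_target):
    """Hypothetical role-aware score: evaluate the actor and target
    masks separately, then average. A prediction scores well only
    if BOTH roles of the interaction are segmented correctly."""
    j_actor = region_similarity(pred_actor, gt_actor)
    j_target = region_similarity(pred_target, gt_target)
    return {"J_actor": j_actor, "J_target": j_target,
            "J_mean": 0.5 * (j_actor + j_target)}
```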
Dataset | Pub. & Year | Videos | Objects | Expressions | Obj./Video | Actor-Target Interactions |
---|---|---|---|---|---|---|
A2D Sentence | CVPR 2018 | 3,782 | 4,825 | 6,656 | 1.28 | - |
J-HMDB Sentence | CVPR 2018 | 928 | 928 | 928 | 1 | - |
Ref-DAVIS | ACCV 2018 | 90 | 205 | 1,544 | 2.27 | - |
Ref-Youtube-VOS | ECCV 2020 | 3,978 | 7,451 | 15,009 | 1.86 | - |
MeViS | ICCV 2023 | 2,006 | 8,171 | 28,570 | 4.28 | - |
ReVOS | ECCV 2024 | 1,042 | 5,535 | 35,074 | 5.31 | - |
Ref-SAV | CVPRW 2025 | 37,311 | 72,509 | 72,509 | 1.94 | - |
InterRVOS-8K (Ours) | - | 8,738 | 35,247 | 127,314 | 4.03 | 17,682 |
Our newly proposed InterRVOS-8K offers the largest number of referring expressions among existing benchmarks and a high object-per-video ratio, enabling richer and more diverse visual grounding across complex scenes. Unlike existing datasets, InterRVOS-8K also provides interaction-aware referring expressions that explicitly distinguish between actor and target roles, enabling fine-grained understanding of visual interactions.
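To illustrate what a "pair of complementary expressions" means in practice, here is a sketch of one plausible annotation record; the field names and RLE placeholders are hypothetical, not InterRVOS-8K's released schema. (For scale, the Obj./Video column is simply objects divided by videos, e.g. 35,247 / 8,738 ≈ 4.03 for InterRVOS-8K.)

```python
# Hypothetical layout for one interaction annotation; the real
# InterRVOS-8K schema may differ in naming and mask encoding.
interaction_annotation = {
    "video_id": "0001",
    "interaction_id": 0,
    # Complementary expressions describing one interaction from the
    # two opposite semantic perspectives (actor vs. target).
    "actor_expression": "man helping a child",
    "target_expression": "child being helped by a man",
    # One binary mask track per role, e.g. run-length encoded per
    # frame ("<rle_frame_i>" is a placeholder, not real data).
    "actor_masks": ["<rle_frame_0>", "<rle_frame_1>"],
    "target_masks": ["<rle_frame_0>", "<rle_frame_1>"],
}
```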
Example query and response:
Q: Please segment 'Man helping a child'.
A: Sure, it's Actor and Target.
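The Q/A above illustrates the intended interface: one interaction expression in, one mask track per role out. The sketch below shows what that contract could look like in code; `ReVIOSaStub` is a stand-in we invented for illustration, not the authors' released API.

```python
import numpy as np

class ReVIOSaStub:
    """Stand-in for the ReVIOSa baseline interface: a single
    interaction expression yields separate actor and target mask
    tracks. Illustrative only; not the actual implementation."""

    def segment(self, frames: list, expression: str) -> dict:
        # A real model would run a vision-language backbone and a
        # mask decoder here; we return empty placeholder masks.
        t = len(frames)
        h, w = frames[0].shape[:2]
        return {
            "actor": np.zeros((t, h, w), dtype=bool),
            "target": np.zeros((t, h, w), dtype=bool),
        }

# Usage: 4 dummy frames, one interaction expression, two role masks.
frames = [np.zeros((480, 854, 3), dtype=np.uint8) for _ in range(4)]
out = ReVIOSaStub().segment(frames, "man helping a child")
assert out["actor"].shape == out["target"].shape == (4, 480, 854)
```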
@misc{jin2025interrvosinteractionawarereferringvideo,
  title={InterRVOS: Interaction-aware Referring Video Object Segmentation},
  author={Woojeong Jin and Seongchan Kim and Seungryong Kim},
  year={2025},
  eprint={2506.02356},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.02356},
}