URECA💡: Unique Region Caption Anything

1Korea University, 2Yonsei University, 3KAIST AI
*Equal Contribution, Corresponding Author
arXiv 2025
We introduce the URECA dataset, a novel region-level captioning dataset designed to ensure caption uniqueness and support multi-granularity regions. Each caption in our benchmark is uniquely mapped to its corresponding region, capturing the distinctive attributes that differentiate it from surrounding areas. Moreover, we show that our proposed model, trained on this dataset, effectively generates unique captions for regions at any level of granularity.

Abstract

Region-level captioning aims to generate natural language descriptions for specific image regions while highlighting their distinguishing features. However, existing methods struggle to produce unique captions across multiple levels of granularity, limiting their real-world applicability. To address the need for detailed region-level understanding, we introduce the URECA dataset, a large-scale dataset tailored for multi-granularity region captioning. Unlike prior datasets that focus primarily on salient objects, the URECA dataset ensures a unique and consistent mapping between regions and captions by incorporating a diverse set of objects, parts, and background elements. Central to this is a stage-wise data curation pipeline, in which each stage incrementally refines region selection and caption generation. By leveraging Multimodal Large Language Models (MLLMs) at each stage, our pipeline produces distinctive and contextually grounded captions with improved accuracy and semantic diversity. Building upon this dataset, we present URECA, a novel captioning model designed to effectively encode multi-granularity regions. URECA maintains essential spatial properties such as position and shape through simple yet impactful modifications to existing MLLMs, enabling fine-grained and semantically rich region descriptions. Our approach introduces dynamic mask modeling and a high-resolution mask encoder to enhance caption uniqueness. Experiments show that URECA achieves state-of-the-art performance on the URECA dataset and generalizes well to existing region-level captioning benchmarks.

URECA Dataset

Automated data curation pipeline of the URECA dataset. The pipeline consists of four key stages that together generate unique captions for multi-granularity regions. In Stage 1, we construct a mask tree that captures the hierarchical relationships between regions. In Stage 2, each node receives a short caption conditioned on its parent node. Stage 3 aggregates the captions of child nodes into their parent's description, and Stage 4 ensures that every node is assigned a unique caption.

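To make the pipeline concrete, below is a minimal Python sketch of Stages 2, 3, and 4 composed over a mask tree. The `MaskNode` class and the `mllm` callable are illustrative assumptions, not the paper's actual code: Stage 1 is assumed to have already produced the tree (e.g., from a multi-granularity segmentation model), and `mllm` stands in for the prompted MLLM call used at each stage.

from dataclasses import dataclass, field
from typing import Callable, List, Optional
import numpy as np

@dataclass
class MaskNode:
    """One region in the hierarchy; children are sub-regions it contains."""
    mask: np.ndarray                        # binary region mask, H x W
    caption: str = ""
    children: List["MaskNode"] = field(default_factory=list)

def caption_tree(node: MaskNode, mllm: Callable[..., str],
                 parent: Optional[MaskNode] = None) -> None:
    # Stage 2: draft a short caption for this node, conditioned on its parent.
    node.caption = mllm(node.mask, parent_caption=parent.caption if parent else None)

    # Recurse top-down so every child sees an up-to-date parent caption.
    for child in node.children:
        caption_tree(child, mllm, parent=node)

    # Stage 3: fold the children's captions back into this node's description.
    if node.children:
        node.caption = mllm(node.mask, child_captions=[c.caption for c in node.children])

    # Stage 4: re-prompt siblings whose captions collide until each one is unique.
    seen = set()
    for child in node.children:
        while child.caption in seen:
            child.caption = mllm(child.mask, avoid=child.caption)
        seen.add(child.caption)

The prompts, models, and uniqueness checks in the real pipeline are more involved; this sketch only shows how the four stages compose over the tree.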

Dataset Visualization

[Figure: visualization of URECA dataset samples]

💡URECA


URECA enables users to generate unique captions that describe the distinctive attributes of any region. The mask encoder effectively encodes multi-granularity regions while preserving their identity, and the mask token serves as a localizer, guiding the LLM to generate region-specific captions conditioned on the image and query tokens.
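As a concrete illustration of this design, here is a minimal PyTorch sketch of a high-resolution mask encoder that compresses a binary region mask into a single mask token, which is then concatenated with the image and query tokens fed to the LLM. The layer sizes and module names are assumptions for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn

class MaskEncoder(nn.Module):
    """Encode a high-resolution binary mask into one embedding (the mask token)."""
    def __init__(self, hidden_dim: int = 4096):
        super().__init__()
        # Strided convolutions keep the position and shape information that
        # naive pooling of the mask would destroy.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=4), nn.GELU(),    # 1024 -> 256
            nn.Conv2d(32, 128, kernel_size=4, stride=4), nn.GELU(),  # 256 -> 64
            nn.Conv2d(128, 256, kernel_size=4, stride=4), nn.GELU(), # 64 -> 16
        )
        self.proj = nn.Linear(256 * 16 * 16, hidden_dim)

    def forward(self, mask: torch.Tensor) -> torch.Tensor:
        # mask: (B, 1, 1024, 1024) binary region mask
        feats = self.conv(mask.float()).flatten(1)   # (B, 256 * 16 * 16)
        return self.proj(feats).unsqueeze(1)         # (B, 1, hidden_dim)

# The mask token is then placed in the LLM input sequence alongside the
# image and query embeddings, e.g.:
#   inputs = torch.cat([image_tokens, mask_token, query_tokens], dim=1)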

Qualitative Results

Qualitative results of URECA and comparison models. Our model generates unique captions conditioned on multi-granularity regions.


Quantitative Results

Performance comparison of URECA with baseline methods and previous models on various evaluation metrics, including BLEU, ROUGE, METEOR, and BERTScore. URECA outperforms the other methods across all metrics on the URECA test set, demonstrating its superior ability to generate unique captions for multi-granularity regions. Note that all comparison methods are trained on the URECA dataset.

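These metrics can be reproduced with off-the-shelf tooling. Below is a minimal sketch using the Hugging Face evaluate library; it is a generic recipe under standard settings, not the paper's released evaluation code, and the example sentences are made up.

import evaluate  # pip install evaluate bert_score rouge_score nltk

predictions = ["a red kayak resting on the shore of the lake"]  # model outputs
references  = ["a red kayak on the lakeshore"]                  # ground-truth captions

bleu   = evaluate.load("bleu").compute(predictions=predictions,
                                       references=[[r] for r in references])
rouge  = evaluate.load("rouge").compute(predictions=predictions, references=references)
meteor = evaluate.load("meteor").compute(predictions=predictions, references=references)
bert   = evaluate.load("bertscore").compute(predictions=predictions,
                                            references=references, lang="en")

print(bleu["bleu"], rouge["rougeL"], meteor["meteor"],
      sum(bert["f1"]) / len(bert["f1"]))  # BERTScore F1 averaged over samples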

Zero-shot Performance

Quantitative results on the region-level captioning task: METEOR scores on the RefCOCOg and Visual Genome datasets. (Zero-Shot) denotes zero-shot transfer.

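Because URECA consumes region masks while benchmarks such as RefCOCOg and Visual Genome annotate regions with bounding boxes (RefCOCOg also provides segmentation masks), zero-shot transfer requires converting each annotated region into the model's mask format. A minimal sketch for the bounding-box case, assuming COCO-style (x, y, w, h) boxes; this is illustrative, not the paper's evaluation script:

import numpy as np

def box_to_mask(box, height, width):
    """Rasterize a COCO-style (x, y, w, h) box into a binary region mask."""
    x, y, w, h = box
    mask = np.zeros((height, width), dtype=np.uint8)
    x0, y0 = max(int(round(x)), 0), max(int(round(y)), 0)
    x1, y1 = min(int(round(x + w)), width), min(int(round(y + h)), height)
    mask[y0:y1, x0:x1] = 1
    return mask

# e.g. feed box_to_mask(ann["bbox"], img_h, img_w) to the mask encoder above.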

BibTeX

@article{lim2025ureca,
  title={URECA: Unique Region Caption Anything},
  author={Lim, Sangbeom and Kim, Junwan and Yoon, Heeji and Jung, Jaewoo and Kim, Seungryong},
  journal={arXiv preprint arXiv:2504.05305},
  year={2025}
}