
Abstract
Person re-identification (Re-ID) often faces challenges due to variations in human poses and camera viewpoints, which significantly affect the appearance of individuals across images. Existing datasets frequently lack diversity and scalability in these aspects, hindering the generalization of Re-ID models to new camera systems or environments. To overcome this, we propose Pose-dIVE, a novel data augmentation approach that injects sparse and underrepresented human pose and camera viewpoint examples into the training data, compensating for the limited diversity of the original training distribution. Our objective is to augment the training dataset so that existing Re-ID models can learn features unbiased by human pose and camera viewpoint variations. By conditioning the diffusion model on both the human pose and camera viewpoint through the SMPL model, our framework generates augmented training data with diverse human poses and camera viewpoints. Experimental results demonstrate the effectiveness of our method in addressing human pose bias and enhancing the generalizability of Re-ID models compared to other data augmentation-based Re-ID approaches.
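The targeting step described above — finding pose/viewpoint regions that are underrepresented in the training data and generating synthetic samples for them — can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the discrete bin labels and the `augmentation_targets` helper are illustrative stand-ins for the actual SMPL-based pose and viewpoint conditioning.

```python
from collections import Counter

def augmentation_targets(pose_bins, boost=1.0):
    """Given the discretized pose/viewpoint bin of each training image,
    return how many synthetic samples to generate per bin so that
    sparse bins are lifted toward the best-represented one.

    Illustrative sketch only; the real method conditions a diffusion
    model on continuous SMPL pose and camera parameters.
    """
    counts = Counter(pose_bins)
    max_count = max(counts.values())
    # Close a fraction `boost` of the gap between each bin and the
    # most frequent bin with generated samples.
    return {b: int(boost * (max_count - c)) for b, c in counts.items()}

# Toy example: frontal views dominate; side and back views are sparse.
bins = ["front"] * 8 + ["side"] * 3 + ["back"] * 1
print(augmentation_targets(bins))  # {'front': 0, 'side': 5, 'back': 7}
```

Generation requests produced this way would then be passed, together with identity-preserving appearance conditions, to the conditional diffusion model to synthesize the missing pose/viewpoint combinations.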
Motivation and Overview

Pose-dIVE Framework

The Effect of Viewpoint and Human Pose Augmentation

Qualitative Results

Quantitative Results

Bibtex
@article{kim2024pose,
  title={Pose-DIVE: Pose-Diversified Augmentation with Diffusion Model for Person Re-Identification},
  author={Kim, In{\`e}s Hyeonsu and Lee, JoungBin and Jin, Woojeong and Son, Soowon and Cho, Kyusun and Seo, Junyoung and Kwak, Min-Seop and Cho, Seokju and Baek, JeongYeol and Lee, Byeongwon and others},
  journal={arXiv preprint arXiv:2406.16042},
  year={2024}
}
Acknowledgements
The website template was borrowed from Michaël Gharbi.