Exploring Temporally-Aware Features for Point Tracking

1KAIST AI, 2Adobe Research
*Indicates Equal Contribution

Visual Comparison of Trajectory Initialization

Abstract

Point tracking in videos is a fundamental task with applications in robotics, video editing, and more. While many vision tasks benefit from pre-trained feature backbones to improve generalizability, point tracking has primarily relied on simpler backbones trained from scratch on synthetic data, which may limit robustness in real-world scenarios. Additionally, point tracking requires temporal awareness to ensure coherence across frames, but the use of temporally-aware features remains underexplored. Most current methods employ a two-stage process: an initial coarse prediction followed by a refinement stage that injects temporal information and corrects errors from the coarse stage. This approach, however, is computationally expensive and potentially redundant if the feature backbone itself captures sufficient temporal information. In this work, we introduce Chrono, a feature backbone specifically designed for point tracking with built-in temporal awareness. Leveraging pre-trained representations from the self-supervised learner DINOv2 and enhanced with a temporal adapter, Chrono effectively captures long-term temporal context, enabling precise predictions even without a refinement stage. Experimental results demonstrate that Chrono achieves state-of-the-art performance in a refiner-free setting on the TAP-Vid-DAVIS and TAP-Vid-Kinetics datasets, outperforming both the common feature backbones used in point tracking and DINOv2, with exceptional efficiency.

Main Architecture

Our approach tackles robust point tracking by combining strong spatial representations with explicit temporal modeling. First, we leverage DINOv2, known for its powerful, self-supervised spatial features, to effectively handle real-world data. However, since DINOv2 alone lacks motion awareness, we augment it with a lightweight temporal adapter that processes information from neighboring frames. Placed between each transformer block of DINOv2, the temporal adapter reduces the spatial resolution of the extracted features, performs local-window temporal attention to capture point movements, and then restores the features to their original dimensions. Finally, a residual connection preserves essential details from the original feature map. By enhancing DINOv2 with this adapter, our architecture learns the spatiotemporal cues necessary for accurate point tracking across challenging video sequences.
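The adapter's bottleneck structure described above (pool spatially, attend over a local temporal window, upsample, add a residual) can be sketched as follows. This is a hypothetical NumPy illustration of the data flow, not the actual implementation; the function name `temporal_adapter` and the pooling/attention details are our assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_adapter(feats, window=3, pool=2):
    """Hypothetical sketch of the temporal adapter inserted between
    DINOv2 blocks: reduce spatial resolution, run local-window temporal
    attention per spatial location, restore resolution, add a residual.

    feats: (T, H, W, C) features from one transformer block,
           with H and W divisible by `pool`.
    """
    T, H, W, C = feats.shape
    # 1) Reduce spatial resolution by average pooling.
    pooled = feats.reshape(T, H // pool, pool, W // pool, pool, C).mean(axis=(2, 4))

    # 2) Local-window temporal attention: each frame attends to its
    #    `window` neighbors at the same (pooled) spatial location.
    out = np.empty_like(pooled)
    for t in range(T):
        t0, t1 = max(0, t - window), min(T, t + window + 1)
        keys = pooled[t0:t1]                                  # (k, h, w, C)
        attn = softmax(
            np.einsum('hwc,khwc->khw', pooled[t], keys) / np.sqrt(C), axis=0
        )
        out[t] = np.einsum('khw,khwc->hwc', attn, keys)

    # 3) Restore the original resolution (nearest-neighbor upsampling)
    #    and add the residual connection to preserve original details.
    up = out.repeat(pool, axis=1).repeat(pool, axis=2)
    return feats + up
```

Because the attention runs on pooled features over a small temporal window, its cost stays low relative to the frozen backbone it augments.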

Point Estimation Process

We measure the quality of the features extracted by Chrono by tracking points purely through feature matching. First, we take the feature vector of a query point from one frame and compare it against every spatial location in subsequent frames using cosine similarity. This produces a correlation map indicating how closely each position matches the query. A soft-argmax operation then pinpoints the most likely location by computing a weighted average of all positions, guided by their correlation scores. Because this approach relies solely on our feature backbone, without iterative or learnable modules, it highlights the backbone's inherent ability to capture robust and discriminative representations. By masking positions outside a local area around the highest correlation, we further refine the prediction to reduce noise and more accurately track the point's movement over time.
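The matching pipeline above can be written out in a few lines. This is a minimal NumPy sketch under our own assumptions (the `track_point` name, the softmax temperature, and the window size are illustrative, not the paper's exact settings):

```python
import numpy as np

def track_point(query_feat, frame_feats, temperature=0.05, window=5):
    """Localize a query feature in a target frame via cosine-similarity
    matching, local masking, and soft-argmax.

    query_feat:  (C,) feature vector of the query point.
    frame_feats: (H, W, C) feature map of the target frame.
    Returns the sub-pixel (x, y) estimate.
    """
    H, W, C = frame_feats.shape
    # Cosine similarity between the query and every spatial location.
    q = query_feat / np.linalg.norm(query_feat)
    f = frame_feats / np.linalg.norm(frame_feats, axis=-1, keepdims=True)
    corr = f @ q                                   # (H, W) correlation map

    # Mask positions outside a local window around the peak to reduce noise.
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    masked = np.full_like(corr, -np.inf)
    y0, y1 = max(0, iy - window), min(H, iy + window + 1)
    x0, x1 = max(0, ix - window), min(W, ix + window + 1)
    masked[y0:y1, x0:x1] = corr[y0:y1, x0:x1]

    # Soft-argmax: a correlation-weighted average of coordinates.
    w = np.exp((masked - masked.max()) / temperature)
    w /= w.sum()
    ys, xs = np.mgrid[0:H, 0:W]
    return float((w * xs).sum()), float((w * ys).sum())
```

Note that the soft-argmax keeps the estimate differentiable and sub-pixel accurate, while the local mask prevents spurious high-correlation regions elsewhere in the frame from dragging the weighted average away from the true match.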


Quantitative Results

Quantitative comparison on the TAP-Vid datasets with the query first mode.
Quantitative comparison on the TAP-Vid datasets with the strided query mode.
Compared to other feature backbones widely used in point tracking, our model shows exceptional position accuracy. Both Chrono (ViT-S/14) and Chrono (ViT-B/14) achieve the highest position accuracy among the compared backbones, demonstrating Chrono's ability to effectively model spatial and temporal information in videos.

Qualitative Results

Qualitative comparison on complex real-world video tracking. We qualitatively compare the results generated by Chrono with those from other backbones commonly used in point tracking. Our model produces smoother and more precise trajectories than the competing backbones.

ResNet

TSM-ResNet

Chrono (Ours)

PCA visualization comparison: DINOv2 vs. Chrono

PCA Analysis of the Temporal Smoothness of Chrono Features. We apply Principal Component Analysis (PCA) to the high-dimensional extracted feature vectors to project them onto a lower-dimensional space. The features derived from Chrono exhibit significantly greater temporal smoothness across consecutive frames than those from DINOv2. This enhanced smoothness is evident from the more continuous trajectories and fewer abrupt changes in the PCA projections over time.
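The analysis above can be reproduced in sketch form: project all frames' features through one shared PCA basis, then measure the average frame-to-frame change of the projections. This is an illustrative NumPy version under our own assumptions; the function names and the smoothness metric (mean absolute frame-to-frame difference) are ours, not necessarily the paper's exact procedure.

```python
import numpy as np

def pca_project(feats, k=3):
    """Project per-frame feature maps onto k shared principal components.

    feats: (T, N, C) — T frames, N spatial tokens, C channels.
    Returns (T, N, k) projections computed from one PCA basis fit
    jointly over all frames, so components are comparable across time.
    """
    T, N, C = feats.shape
    X = feats.reshape(T * N, C)
    X = X - X.mean(axis=0)
    # Principal directions via SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:k].T).reshape(T, N, k)

def temporal_smoothness(proj):
    """Mean absolute frame-to-frame change of the projections
    (lower values indicate temporally smoother features)."""
    return float(np.abs(np.diff(proj, axis=0)).mean())
```

Fitting a single basis over the whole clip is the key design choice: per-frame PCA would yield components with arbitrary signs and orderings, making trajectories across frames incomparable.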

BibTeX

BibTex Code Here