arXiv 2026

TrackCraft3R: Repurposing Video Diffusion Transformers for Dense 3D Tracking

Jisu Nam1 Jahyeok Koo1 Soowon Son1 Jaewoo Jung1 Honggyu An1 Junhwa Hur2,† Seungryong Kim1,†

1 KAIST 2 Google DeepMind † Corresponding authors


Dense 3D Tracking in the Wild

TrackCraft3R is the first method that repurposes a video diffusion transformer as a feed-forward dense 3D tracker. Click any thumbnail to view the predicted dense 3D tracks on real-world videos.

Abstract

Dense 3D tracking from monocular video is fundamental to dynamic scene understanding. While recent 3D foundation models provide reliable per-frame geometry, recovering object motion in this geometry remains challenging and benefits from strong motion priors learned from real-world videos. Existing 3D trackers either follow iterative paradigms trained from scratch on synthetic data or fine-tune 3D reconstruction models learned from static multi-view images, both lacking real-world motion priors. Pre-trained video diffusion transformers (video DiTs) offer rich spatio-temporal priors from internet-scale videos, making them a promising foundation for 3D tracking. However, their frame-anchored formulation, which generates each frame's content, is fundamentally mismatched with reference-anchored dense 3D tracking, which must follow the same physical points from a reference frame across time.

We present TrackCraft3R, the first method to repurpose a video DiT as a feed-forward dense 3D tracker. Given a monocular video and its frame-anchored reconstruction pointmap, TrackCraft3R predicts a reference-anchored tracking pointmap that follows every pixel of the first frame across time in a single forward pass, along with its visibility. We achieve this through two designs: (i) a dual-latent representation that uses per-frame geometry latents and reference-anchored track latents as dense queries; and (ii) temporal RoPE alignment, which specifies the target timestamp of each track latent. Together, these designs convert the per-frame generative paradigm of video DiTs into a reference-anchored tracking formulation with LoRA fine-tuning.

TrackCraft3R achieves state-of-the-art performance on standard sparse and dense 3D tracking benchmarks, while running 1.3× faster and using 4.6× less peak memory than the strongest prior method. We further demonstrate robustness to large motions and long videos.

Method

Pointmap Representation

Reconstruction vs. tracking pointmap

We adopt two pointmap representations in a shared world coordinate frame. The frame-anchored reconstruction pointmap $\mathbf{P}_j(t_j)$ gives the 3D positions of frame $\mathbf{I}_j$ at its own timestamp $t_j$, readily obtained from off-the-shelf depth and camera-pose estimators. The reference-anchored tracking pointmap $\mathbf{P}_0(t_j)$ gives the 3D positions of the content originally seen in $\mathbf{I}_0$ at timestamp $t_j$.

Goal. Given a video $\mathbf{V} = \{\mathbf{I}_j\}_{j=0}^{F}$ and its reconstruction pointmaps $\{\mathbf{P}_j(t_j)\}_{j=0}^{F}$, predict the tracking pointmaps $\{\mathbf{P}_0(t_j)\}_{j=0}^{F}$ that establish dense 3D correspondences across time, together with visibility maps $\{\mathbf{o}_j\}_{j=0}^{F}$.
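The two representations above differ only in which frame's pixels index the map. A minimal shape-level sketch (variable names and sizes are ours, not the paper's API):

```python
# Sketch of the two pointmap representations (shapes only):
#   P_rec[j] = P_j(t_j): frame-anchored geometry of frame j at its own time
#   P_trk[j] = P_0(t_j): reference-anchored positions of frame-0 content at t_j
import numpy as np

F, H, W = 11, 64, 64  # 12 frames; toy resolution

# Frame-anchored reconstruction pointmaps (from off-the-shelf depth + pose).
P_rec = np.zeros((F + 1, H, W, 3))

# Reference-anchored tracking pointmaps: every map is indexed by the pixels
# of frame 0, so pixel (u, v) traces one physical point across timestamps.
P_trk = np.zeros((F + 1, H, W, 3))

# Per-frame visibility of frame-0 content (occluded points marked False).
vis = np.ones((F + 1, H, W), dtype=bool)

# A dense 3D trajectory of the frame-0 pixel (u, v) is simply a slice:
u, v = 10, 20
trajectory = P_trk[:, u, v, :]  # shape (F + 1, 3)
assert trajectory.shape == (F + 1, 3)
```

Because every tracking pointmap shares frame 0's pixel grid, extracting any pixel's full 3D trajectory is a constant-time slice rather than a correspondence search.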

Overall Architecture

TrackCraft3R architecture

Each RGB frame and its reconstruction pointmap are encoded by separate VAE encoders into latents $\mathbf{z}_j^{\text{rgb}}$ and $\mathbf{z}_j^{\text{pm}}$. We repurpose the video DiT's full 3D attention with two key designs.

① Dual-Latent Representation

The geometry latent $\mathbf{g}_j = [\mathbf{z}_j^{\text{rgb}};\, \mathbf{z}_j^{\text{pm}}]$ couples RGB appearance and 3D geometry at timestamp $t_j$ via channel-wise concatenation. The first-frame-anchored track latent $\mathbf{r}_j = \mathbf{g}_0$ replicates the first-frame geometry latent across all timestamps and serves as a dense query for tracking. Once $\mathbf{r}_j$ matches the same physical point in $\mathbf{g}_j$ via attention, the matched pointmap latent directly provides its 3D position.
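The two latent streams reduce to a channel-wise concatenation and a first-frame replication; a hedged sketch with illustrative tensor sizes (not the released model code):

```python
# Dual-latent construction: geometry latents couple appearance and geometry,
# track latents replicate the first-frame geometry latent as dense queries.
import numpy as np

F, C, h, w = 11, 16, 28, 28  # frames, latent channels, latent grid (toy sizes)

z_rgb = np.random.randn(F + 1, C, h, w)  # VAE latents of RGB frames
z_pm = np.random.randn(F + 1, C, h, w)   # VAE latents of reconstruction pointmaps

# Geometry latent g_j = [z_rgb_j ; z_pm_j]: appearance + geometry at t_j.
g = np.concatenate([z_rgb, z_pm], axis=1)  # (F+1, 2C, h, w)

# Track latent r_j = g_0, replicated across all timestamps: a dense query
# carrying the reference frame's content to every time step.
r = np.repeat(g[0:1], F + 1, axis=0)       # (F+1, 2C, h, w)

assert g.shape == (F + 1, 2 * C, h, w)
assert np.allclose(r[5], g[0])             # every r_j starts identical to g_0
```

Attention between `r` and `g` then only has to decide *where* in each frame the replicated reference content has moved; the matched pointmap channels of `g` already carry the 3D answer.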

② Temporal RoPE Alignment

To make each track latent attend to the geometry latent at the correct timestamp, we repurpose the temporal axis of 3D RoPE and assign $\mathbf{g}_j$ and $\mathbf{r}_j$ the same temporal index $t_j$. Since RoPE encodes relative position, tokens with identical temporal indices exhibit stronger attention, so each $\mathbf{r}_j$ attends to its $\mathbf{g}_j$ at timestamp $t_j$.
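The alignment effect follows from RoPE's relative-position property: rotating query and key by the same temporal angle leaves their dot product unchanged, so a zero temporal offset maximizes the score. A self-contained 1-D toy illustration (not the model's actual 3D RoPE):

```python
# Toy 1-D rotary embedding: assigning r_j and g_j the SAME temporal index t_j
# gives them zero relative offset, which maximizes the rotary attention score.
import numpy as np

def rope(x, pos, theta=10000.0):
    """Rotate feature pairs of x by angles proportional to position `pos`."""
    d = x.shape[-1]
    freqs = theta ** (-np.arange(0, d, 2) / d)  # (d/2,) rotation frequencies
    ang = pos * freqs
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return np.concatenate(
        [x1 * np.cos(ang) - x2 * np.sin(ang),
         x1 * np.sin(ang) + x2 * np.cos(ang)], axis=-1)

rng = np.random.default_rng(0)
q = rng.standard_normal(64)  # a track-latent query feature
k = q.copy()                 # its matching geometry-latent key

# Score of r_5 (temporal index 5) against g_j for j = 0..11:
scores = [rope(q, 5) @ rope(k, j) for j in range(12)]
assert int(np.argmax(scores)) == 5  # attention peaks at the aligned timestamp
```

With identical content, the score is proportional to a sum of cosines of the relative offset, so it is maximized exactly when the temporal indices coincide; this is what steers each $\mathbf{r}_j$ toward its $\mathbf{g}_j$.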

The track-latent outputs are then decoded by two VAE decoders into a residual displacement $\hat{\boldsymbol{\Delta}}_j$ and a visibility map $\hat{\mathbf{o}}_j$, and the tracking pointmap is recovered as $\hat{\mathbf{P}}_0(t_j) = \mathbf{P}_0(t_0) + \hat{\boldsymbol{\Delta}}_j$.
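The final reconstruction step is a single residual addition per timestamp; a short sketch with illustrative shapes (names are ours):

```python
# Decoding sketch: the two VAE decoders yield a residual displacement and a
# visibility map; the tracking pointmap is the reference pointmap plus the
# residual, exactly as in the formula above.
import numpy as np

H, W = 64, 64
P0_t0 = np.random.randn(H, W, 3)          # reference pointmap P_0(t_0)
delta_j = np.random.randn(H, W, 3) * 0.1  # predicted residual displacement
o_j = np.random.rand(H, W) > 0.2          # predicted per-pixel visibility

P0_tj = P0_t0 + delta_j                   # \hat{P}_0(t_j) = P_0(t_0) + \hat{Δ}_j
assert P0_tj.shape == (H, W, 3)
```

Predicting a residual rather than absolute coordinates keeps the regression target small and centered, since most points move only moderately between the reference frame and any given timestamp.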

Attention Visualization

Query point in green. (a) Attention from $\mathbf{r}_5$ to $\{\mathbf{g}_j\}$ concentrates on $\mathbf{g}_5$: RoPE aligns each track latent with the correct timestamp. (b) Within $\mathbf{g}_5$, attention finds the same physical point under motion, yielding accurate dense correspondence between track and geometry latents.

[Figure: (a) attention from $\mathbf{r}_5$ (query point) to $\mathbf{g}_2$, $\mathbf{g}_5$, $\mathbf{g}_8$; (b) attention from $\mathbf{r}_5$ within $\mathbf{g}_5$ at layers 14, 15, and 16.]

Quantitative Results on World-Coordinate 3D Tracking

We report AJ, APD$_\text{3D}$, and OA after Sim(3) alignment. TrackCraft3R + ViPE surpasses DELTAv2 + ViPE and all feed-forward baselines, and TrackCraft3R + DA3 surpasses DELTAv2 + DA3 and every other method by a large margin, achieving the best average AJ, APD$_\text{3D}$, and OA overall.

Each cell lists AJ / APD$_\text{3D}$ / OA.

| Method | ADT | PStudio | DR | PO | Kubric | Average |
|---|---|---|---|---|---|---|
| *(i) Iterative dense 3D trackers (use external depth and camera as input)* | | | | | | |
| DELTA + ViPE | 0.509 / 0.695 / 0.814 | 0.499 / 0.781 / 0.696 | 0.405 / 0.585 / 0.764 | 0.456 / 0.629 / 0.812 | 0.289 / 0.372 / 0.963 | 0.432 / 0.612 / 0.810 |
| DELTAv2 + ViPE | 0.514 / 0.707 / 0.804 | 0.535 / 0.803 / 0.728 | 0.417 / 0.589 / 0.783 | 0.446 / 0.625 / 0.801 | 0.286 / 0.369 / 0.956 | 0.440 / 0.618 / 0.814 |
| DELTAv2 + DA3 | 0.615 / 0.822 / 0.813 | 0.557 / 0.850 / 0.709 | 0.449 / 0.622 / 0.782 | 0.530 / 0.725 / 0.802 | 0.335 / 0.411 / 0.959 | 0.498 / 0.686 / 0.813 |
| *(ii) Feed-forward dense 3D trackers (pre-trained for 3D reconstruction)* | | | | | | |
| St4RTrack | 0.593 / 0.768 / 0.832 | 0.572 / 0.755 / 0.810 | 0.353 / 0.571 / 0.684 | 0.397 / 0.658 / 0.686 | 0.119 / 0.190 / 0.770 | 0.407 / 0.588 / 0.756 |
| Any4D | 0.465 / 0.613 / 0.836 | 0.422 / 0.571 / 0.813 | 0.441 / 0.696 / 0.680 | 0.439 / 0.683 / 0.735 | 0.389 / 0.497 / 0.883 | 0.431 / 0.612 / 0.789 |
| TraceAnything | 0.593 / 0.763 / 0.841 | 0.523 / 0.693 / 0.813 | 0.207 / 0.355 / 0.733 | 0.204 / 0.365 / 0.693 | 0.242 / 0.325 / 0.820 | 0.354 / 0.500 / 0.780 |
| *(iii) Feed-forward dense 3D trackers (pre-trained for video generation)* | | | | | | |
| MotionCrafter | 0.446 / 0.604 / 0.804 | 0.504 / 0.666 / 0.814 | 0.493 / 0.617 / 0.917 | 0.420 / 0.641 / 0.730 | 0.218 / 0.301 / 0.873 | 0.416 / 0.566 / 0.828 |
| TrackCraft3R + ViPE | 0.668 / 0.769 / 0.941 | 0.680 / 0.816 / 0.894 | 0.584 / 0.703 / 0.941 | 0.584 / 0.726 / 0.894 | 0.303 / 0.394 / 0.960 | 0.564 / 0.682 / 0.926 |
| TrackCraft3R + DA3 | 0.863 / 0.951 / 0.945 | 0.729 / 0.871 / 0.889 | 0.652 / 0.771 / 0.939 | 0.729 / 0.868 / 0.894 | 0.421 / 0.505 / 0.959 | 0.679 / 0.793 / 0.925 |
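Sim(3) alignment fits a global scale, rotation, and translation between predicted and ground-truth tracks before computing metrics. A hedged sketch using the standard Umeyama algorithm (the generic procedure, not the paper's exact evaluation code):

```python
# Umeyama fit of a similarity transform (s, R, t) minimizing
# ||s R X + t - Y||^2, as commonly used for Sim(3) metric alignment.
import numpy as np

def umeyama_sim3(X, Y):
    """Fit (scale, rotation, translation) mapping points X onto Y."""
    mu_x, mu_y = X.mean(0), Y.mean(0)
    Xc, Yc = X - mu_x, Y - mu_y
    cov = Yc.T @ Xc / len(X)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:  # guard against reflections
        S[2, 2] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (Xc ** 2).sum(1).mean()
    t = mu_y - s * R @ mu_x
    return s, R, t

# Round-trip check on synthetic 3D tracks.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true *= np.sign(np.linalg.det(R_true))  # force a proper rotation
Y = 1.7 * X @ R_true.T + np.array([0.3, -0.2, 1.0])
s, R, t = umeyama_sim3(X, Y)
assert np.allclose(s * X @ R.T + t, Y, atol=1e-6)
```

Aligning once per sequence removes the global scale and pose ambiguity of monocular reconstruction, so the metrics measure motion accuracy rather than calibration.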

Robustness to Long Videos and Large Motion

For large motion, we fix the clip length to 12 frames and increase the temporal stride $s$ from 1 to 12 (in steps of 1), enlarging per-frame displacement. For long videos, we fix the stride to $s{=}1$ and increase the sequence length $L$ from 12 to 120 (in steps of 12). Curves are averaged over the sparse 3D tracking benchmarks.
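The index arithmetic behind the two protocols, as a sketch of the clip sampling (not the evaluation harness):

```python
# (a) Large motion: clip length fixed at 12 frames, stride s grows 1..12.
# (b) Long video: stride fixed at 1, length L grows 12..120 in steps of 12.
def clip_indices(length=12, stride=1):
    """Frame indices of one evaluation clip."""
    return [j * stride for j in range(length)]

assert clip_indices(12, 12)[-1] == 132   # at s = 12, adjacent frames are 12 apart
assert len(clip_indices(120, 1)) == 120  # at L = 120, the clip spans 120 frames
```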

APD vs stride
(a) $\text{APD}_\text{3D}$, varying stride $s$
AJ vs stride
(b) AJ, varying stride $s$
APD vs length
(c) $\text{APD}_\text{3D}$, varying length $L$
AJ vs length
(d) AJ, varying length $L$

TrackCraft3R's performance drops much more slowly than DELTAv2 as stride $s$ or frame length $L$ grows, indicating that the learned motion prior generalizes well to large displacements and to long horizons beyond the training length (12 frames).

Inference Efficiency

We compare inference time and peak GPU memory of TrackCraft3R, DELTA, and DELTAv2 at $448{\times}448$ resolution for 12- and 23-frame clips on a single NVIDIA A6000 GPU.

| Frames | Method | Time (s) ↓ | Memory (GB) ↓ |
|---|---|---|---|
| 12 | DELTA | 14.64 | 29.97 |
| 12 | DELTAv2 | 5.00 | 35.46 |
| 12 | TrackCraft3R (ours) | 3.91 | 7.63 |
| 23 | DELTA | 28.92 | 30.78 |
| 23 | DELTAv2 | 9.70 | 35.90 |
| 23 | TrackCraft3R (ours) | 7.84 | 7.63 |

TrackCraft3R is faster and lighter because it predicts trajectories in a single forward pass within a $1/16$ spatially compressed latent space, replacing the iterative refinement and explicit 4D cost volumes of DELTA and DELTAv2 with full 3D attention. The same trend holds for longer sequences.
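A back-of-envelope token count makes the $1/16$ compression argument concrete (our arithmetic; the two-stream factor is our assumption based on the dual-latent design, not a reported figure):

```python
# Attention runs over latent tokens, not pixels: 1/16 spatial compression
# turns each 448x448 frame into a 28x28 grid of tokens.
H = W = 448
patch = 16                       # 1/16 spatial compression of the latent space
tokens_per_frame = (H // patch) * (W // patch)

frames = 12
streams = 2                      # geometry + track latents (assumed dual stream)
total_tokens = tokens_per_frame * frames * streams

assert tokens_per_frame == 784   # vs. 200,704 pixels per frame
assert total_tokens == 18816     # a sequence attends over ~19k tokens
```

Tens of thousands of tokens per clip, processed in one forward pass, is what replaces the per-iteration cost-volume construction of the iterative baselines.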

Qualitative Comparison

We compare 3D trajectories predicted by TrackCraft3R and DELTAv2 on real-world ITTO and DAVIS videos. TrackCraft3R accurately estimates dense 3D trajectories under large camera motion, object motion, and occlusion, where DELTAv2 often fails. Note that the same query points are shared across methods.

Qualitative comparison vs. DELTAv2 on ITTO and DAVIS

Limitations and Future Work

BibTeX