CATs++: Boosting Cost Aggregation with Convolutions and Transformers
TPAMI 2022
Comparison between CATs (left) and CATs++ (right).
Abstract
Cost aggregation is a process in image matching tasks that aims to disambiguate noisy matching scores. Existing methods generally tackle this with hand-crafted or CNN-based techniques, which either lack robustness to severe deformations or inherit the limitations of CNNs, failing to discriminate incorrect matches due to limited receptive fields and a lack of adaptability. In this paper, we introduce Cost Aggregation with Transformers (CATs), which explores global consensus within the initial correlation map through architectural designs that exploit the global receptive field of the self-attention mechanism. To this end, we include appearance affinity modeling, which helps to disambiguate the noisy initial correlation maps. We further introduce multi-level aggregation, to exploit the rich semantics present at different feature levels, and swapping self-attention, to obtain reciprocal matching scores that act as a regularization. Although CATs attains competitive performance, it faces a limitation: high computational cost, which restricts it to limited resolutions and hurts performance. To overcome this, we propose CATs++, an extension of CATs. Concretely, we introduce early convolutions prior to transformer-based cost aggregation to control the number of tokens and inject convolutional inductive bias, and then propose a novel transformer architecture for efficient and effective cost aggregation, yielding an apparent performance boost and cost reduction. With the reduced costs, we can compose our network with a hierarchical structure to process higher-resolution inputs. We show that the proposed method, with these components integrated, outperforms the previous state-of-the-art methods by large margins.
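To make the pipeline concrete, here is a minimal sketch of transformer-based cost aggregation with CATs++-style early convolutions and CATs-style swapping self-attention. This is our illustration, not the authors' released code: the module name `CostAggregator`, the layer sizes, and the single-level setup are assumptions for brevity, and appearance affinity modeling and multi-level aggregation are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CostAggregator(nn.Module):
    """Aggregates a raw correlation map with a transformer encoder (sketch)."""

    def __init__(self, spatial=16, dim=128, heads=4):
        super().__init__()
        tokens = spatial * spatial  # each token: one row of the correlation map
        # Early convolution over the cost map (CATs++-style) to inject
        # convolutional inductive bias; a strided variant would also
        # reduce the token count.
        self.early_conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)
        self.embed = nn.Linear(tokens, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim * 2, batch_first=True),
            num_layers=1,
        )
        self.head = nn.Linear(dim, tokens)

    def aggregate(self, corr):
        # corr: (B, N_src, N_tgt) raw matching scores
        x = self.early_conv(corr.unsqueeze(1)).squeeze(1)
        return self.head(self.encoder(self.embed(x)))

    def forward(self, feat_src, feat_tgt):
        # feat_*: (B, N, C) L2-normalized features; correlation via dot product
        corr = torch.einsum('bic,bjc->bij', feat_src, feat_tgt)
        # Swapping self-attention: aggregate, transpose, and aggregate again,
        # so source-to-target and target-to-source scores regularize each other.
        corr = self.aggregate(corr)
        return self.aggregate(corr.transpose(1, 2)).transpose(1, 2)


if __name__ == '__main__':
    b, s, c = 2, 16, 64
    f1 = F.normalize(torch.randn(b, s * s, c), dim=-1)
    f2 = F.normalize(torch.randn(b, s * s, c), dim=-1)
    out = CostAggregator(spatial=s)(f1, f2)
    print(out.shape)  # torch.Size([2, 256, 256])
```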
Architecture
Overall network architecture of CATs++.
Intuition of the proposed components: (left) efficient transformer-based aggregator (in CATs++); (right) standard transformer-based aggregator (in CATs).
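As a rough, back-of-the-envelope illustration of why the efficient aggregator matters (our own arithmetic, not figures from the paper): self-attention cost grows quadratically in the number of cost tokens, so shrinking the token count before the transformer pays off quickly.

```python
def attn_flops(num_tokens, dim):
    # Two N x N x d matrix products dominate: QK^T and the
    # attention-weighted sum over V.
    return 2 * num_tokens * num_tokens * dim

full = attn_flops(64 * 64, 128)     # standard aggregator on a 64x64 grid
reduced = attn_flops(16 * 16, 128)  # after a 4x per-axis token reduction
print(f"speed-up factor: {full / reduced:.0f}x")  # 256x
```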
Citation
Acknowledgements
Thanks to our family Podo (cat), Aru (dog) and Dubu (dog) for their support. We love you.
The website template was borrowed from Michaël Gharbi.