- Session 1: Deep Learning -- Day 2 (Nov.18), talks: 09:00-11:00 (5th floor Hall 1), poster session: 11:00-13:30
- Poster number: Mon16
Junfei Zhuang (Beijing University of Posts and Telecommunications & Beijing FaceAll Co.); Yuan Dong (Beijing University of Posts and Telecommunications); Hongliang Bai (Beijing FaceAll Technology Co., Ltd.); Gang Wang (SRCB)
Siamese networks have recently drawn great attention in the visual tracking community because of their balanced accuracy and speed. However, most existing Siamese frameworks describe the target appearance using a global pattern from the last layer, making them highly sensitive to similar distractors, non-rigid appearance change, and partial occlusion. To address these issues, we propose a Multi-branch Siamese network (MSiam) for high-performance object tracking. MSiam performs layer-wise feature aggregation and considers global and local patterns simultaneously for more accurate target tracking. In particular, we propose a feature aggregation module (FAM) that preserves the heterogeneity of the three feature types, further improving the discriminability of MSiam by combining high-level semantic and low-level spatial information. To improve robustness to non-rigid appearance change and partial occlusion, a multi-scale local pattern detection module (LPDM) identifies discriminative regions of the target object. By considering various combinations of these local structures, our tracker can form diverse structural patterns. Extensive evaluations on five benchmarks demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods while running beyond real time.
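The core operation shared by Siamese trackers of this kind is cross-correlating a template feature map with a search-region feature map to obtain a response map, and the layer-wise aggregation the abstract describes then combines such responses from several layers. The sketch below is a minimal NumPy illustration of these two generic steps, not the paper's MSiam implementation; the function names and the equal-weight aggregation are assumptions for illustration only.

```python
import numpy as np

def xcorr(z, x):
    """Valid channel-summed cross-correlation of a template feature map
    z (C, Hz, Wz) with a search feature map x (C, Hx, Wx) -- the basic
    similarity operation behind Siamese tracking responses."""
    c, hz, wz = z.shape
    _, hx, wx = x.shape
    ho, wo = hx - hz + 1, wx - wz + 1
    resp = np.zeros((ho, wo))
    for i in range(ho):
        for j in range(wo):
            # Inner product between the template and one search window.
            resp[i, j] = np.sum(z * x[:, i:i + hz, j:j + wz])
    return resp

def aggregate_responses(responses, weights=None):
    """Toy layer-wise aggregation: weighted sum of per-layer response
    maps (assumed here to share one spatial size for simplicity)."""
    if weights is None:
        weights = [1.0 / len(responses)] * len(responses)
    return sum(w * r for w, r in zip(weights, responses))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.standard_normal((4, 3, 3))       # template features
    x = np.zeros((4, 10, 10))                 # search-region features
    x[:, 2:5, 3:6] = z                        # plant the target at (2, 3)
    resp = xcorr(z, x)
    peak = np.unravel_index(np.argmax(resp), resp.shape)
    print(resp.shape, peak)                   # peak localizes the target
```

In a real tracker these feature maps come from a shared backbone network, the correlation runs on GPU, and the aggregation weights are learned; the sketch only shows why the response peak localizes the target.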