Robust multi-modality anchor graph-based label prediction for RGB-infrared tracking
Journal contribution posted on 19.03.2020, 13:45, authored by Xiangyuan Lan, Wei Zhang, Shengping Zhang, Kumar J. Deepak, Huiyu Zhou
Given the massive video data generated by applications such as security monitoring and traffic management, developing an industrial intelligent video analytic system that can automatically extract and analyze the meaningful content of videos is essential for saving cost and human labour. To achieve motion perception in a video analytic system, a key problem is how to track the object of interest effectively so that its location and status can be inferred accurately. To solve this problem, and motivated by the growing popularity of RGB-infrared dual-camera systems, this paper proposes a new RGB-infrared tracking framework which exploits information from both the RGB and infrared modalities to enhance tracking robustness. In particular, within the tracking framework, a robust multi-modality anchor graph-based label prediction model is developed, which is able to 1) construct a scalable graph representation of the relationships among samples based on local anchor approximation; 2) diffuse a limited number of known labels to a large number of unlabeled samples efficiently via a transductive learning strategy; and 3) adaptively incorporate importance weights that measure the discriminability of each modality. Efficient optimization algorithms are derived to solve the prediction model. Experimental results on various multi-modality videos demonstrate the effectiveness of the proposed method.
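To make the anchor graph-based label prediction idea concrete, the sketch below illustrates the general technique the abstract describes (following the standard anchor graph construction of Liu et al.), not the paper's exact model: each sample is linked only to its `s` nearest anchors (local anchor approximation), and labels are diffused from a few labeled samples to all others by solving a small regularized least-squares problem over anchor labels. All function and parameter names (`anchor_graph_labels`, `s`, `lam`) are illustrative assumptions; the multi-modality weighting of the actual model is omitted.

```python
import numpy as np
from scipy.spatial.distance import cdist


def anchor_graph_labels(X, anchors, y_known, known_idx, s=2, lam=0.1):
    """Transductive label prediction on an anchor graph (illustrative sketch).

    X        : (n, d) all samples (labeled + unlabeled)
    anchors  : (m, d) anchor points, m << n (e.g. from k-means)
    y_known  : class labels of the labeled samples
    known_idx: indices into X of the labeled samples
    """
    y_known = np.asarray(y_known)
    # Z: sample-to-anchor affinities, nonzero only for the s nearest anchors
    D = cdist(X, anchors)
    Z = np.zeros((X.shape[0], anchors.shape[0]))
    for i, d in enumerate(D):
        nn = np.argsort(d)[:s]                      # local anchor approximation
        sigma2 = d[nn].mean() ** 2 + 1e-12          # local bandwidth
        w = np.exp(-d[nn] ** 2 / (2.0 * sigma2))
        Z[i, nn] = w / w.sum()                      # rows sum to 1
    # Reduced transductive problem: solve for anchor label matrix A so that
    # the labeled rows of Z @ A match the one-hot labels Y (ridge-regularized)
    Zl = Z[known_idx]
    Y = np.eye(y_known.max() + 1)[y_known]
    A = np.linalg.solve(Zl.T @ Zl + lam * np.eye(Z.shape[1]), Zl.T @ Y)
    # Diffuse: every sample's label is predicted through its anchors
    return (Z @ A).argmax(axis=1)
```

Because the linear system is only `m x m` (number of anchors), label diffusion stays efficient even when the number of samples `n` is large, which is the scalability argument behind anchor graphs.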