Learning deep features for multiple object tracking by using a multi-task learning strategy

Title:
Learning deep features for multiple object tracking by using a multi-task learning strategy
Other Titles:
2014 IEEE International Conference on Image Processing (ICIP)
DOI:
10.1109/ICIP.2014.7025168
Publication Date:
27 October 2014
Citation:
L. Wang, N. T. Pham, T. T. Ng, G. Wang, K. L. Chan and K. Leman, "Learning deep features for multiple object tracking by using a multi-task learning strategy," 2014 IEEE International Conference on Image Processing (ICIP), Paris, 2014, pp. 838-842. doi: 10.1109/ICIP.2014.7025168
Abstract:
Model-free object tracking is still challenging because of the limited prior knowledge and the unexpected variation of the target object. In this paper, we propose a feature learning algorithm for model-free multiple object tracking. First, we pre-learn generic features invariant to diverse motion transformations from auxiliary video data by using a deep auto-encoder network. Then, we adapt the pre-learned features to the multiple target objects in a multi-task learning manner, treating the feature adaptation for each target object as a single task. We simultaneously learn the common feature shared by all target objects and the individual feature of each object. Experimental results demonstrate that our feature learning algorithm can significantly improve multiple object tracking performance.
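
Illustrative sketch (not part of the record): the two-stage pipeline described in the abstract can be read as (1) pre-training a deep auto-encoder for reconstruction on auxiliary video patches and (2) jointly adapting the pre-learned encoder to several tracked objects, with an adaptation layer decomposed into a component shared by all objects plus an object-specific component. The PyTorch code below is a minimal sketch of that reading only; it is not the authors' implementation, and all class/function names, layer sizes, the additive shared-plus-individual decomposition, and the toy data are assumptions made for exposition.

import torch
import torch.nn as nn

# Stage 1: pre-learn generic features from auxiliary video patches
# with a deep auto-encoder (reconstruction objective).
class AutoEncoder(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain(model, aux_patches, epochs=20, lr=1e-3):
    """Minimize reconstruction error on auxiliary (non-target) video data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(aux_patches), aux_patches)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Stage 2: adapt the pre-learned features to all tracked objects at once,
# treating each object as one task. The adaptation layer is split into a
# part shared by all objects plus one individual part per object.
class MultiTaskAdapter(nn.Module):
    def __init__(self, encoder, hid_dim=256, num_objects=3):
        super().__init__()
        self.encoder = encoder                      # pre-learned generic features
        self.shared = nn.Linear(hid_dim, hid_dim)   # common across all objects
        self.individual = nn.ModuleList(
            nn.Linear(hid_dim, hid_dim, bias=False) for _ in range(num_objects)
        )

    def forward(self, x, obj_id):
        h = self.encoder(x)
        # shared + individual decomposition of the adapted feature
        return torch.sigmoid(self.shared(h) + self.individual[obj_id](h))

# Example joint adaptation step: each object contributes its own loss,
# while the shared parameters receive gradients from all of them.
if __name__ == "__main__":
    aux = torch.rand(512, 1024)                     # toy auxiliary patches
    ae = pretrain(AutoEncoder(), aux)
    adapter = MultiTaskAdapter(ae.encoder, num_objects=3)
    heads = nn.ModuleList(nn.Linear(256, 1) for _ in range(3))  # per-object scorers
    opt = torch.optim.Adam(list(adapter.parameters()) + list(heads.parameters()), lr=1e-3)

    patches = torch.rand(3, 32, 1024)               # candidate patches per object
    labels = torch.randint(0, 2, (3, 32, 1)).float()
    loss = sum(
        nn.functional.binary_cross_entropy_with_logits(
            heads[i](adapter(patches[i], i)), labels[i])
        for i in range(3)
    )
    opt.zero_grad(); loss.backward(); opt.step()
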
License type:
Publisher Copyrights
Funding Info:
Description:
(c) 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
ISSN:
1522-4880
Files uploaded: