[1] Cheng Xu, Zhang Yifeng, Liu Yuan, et al. Object tracking algorithm based on deep feature[J]. Journal of Southeast University (Natural Science Edition), 2017, 47(1): 1-5. [doi:10.3969/j.issn.1001-0505.2017.01.001]

Object tracking algorithm based on deep feature

Journal of Southeast University (Natural Science Edition) [ISSN:1001-0505/CN:32-1178/N]

Volume: 47
Issue: 2017, No. 1
Pages: 1-5
Section: Computer Science and Engineering
Publication date: 2017-01-18

Article Info

Title:
Object tracking algorithm based on deep feature
Author(s):
Cheng Xu(1,2,3), Zhang Yifeng(1,3), Liu Yuan(1,3), Cui Jinshi(3), Zhou Lin(1)
1 School of Information Science and Engineering, Southeast University, Nanjing 210096, China
2 Nanjing Marine Radar Institute, Nanjing 210003, China
3 Key Laboratory of Machine Perception of Ministry of Education, Peking University, Beijing 100871, China
Keywords:
visual tracking; deep learning; sparse representation; template updating
CLC number:
TP391
DOI:
10.3969/j.issn.1001-0505.2017.01.001
Abstract:
To solve the robustness problem of moving-object tracking, a tracking algorithm based on deep features is proposed. First, each frame in the video is normalized by an affine transformation. Then, object features are extracted from the normalized image by a stacked denoising autoencoder. Because the extracted deep features are high-dimensional, an efficient dimension-reduction method based on sparse representation is presented to improve the computational efficiency: the high-dimensional features are projected into a low-dimensional space by a projection matrix, and object tracking is then achieved by combining this representation with a particle filter. Finally, the object information of the first frame is integrated into the updating process of the object appearance model to reduce the risk of object drift during tracking. The experimental results show that the proposed tracking algorithm achieves high accuracy on six video sequences and can stably track the object under occlusion, illumination change, scale variation and fast motion.
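The projection-plus-particle-filter step described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the learned sparse-representation projection matrix is replaced by a sparse random matrix, the autoencoder features by random vectors, and `sparse_projection_matrix` and `particle_filter_step` are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection_matrix(d_high, d_low, density=0.1):
    """Sparse random projection (stand-in for the paper's learned matrix):
    most entries are zeroed, the rest are Gaussian."""
    P = rng.normal(size=(d_low, d_high))
    mask = rng.random((d_low, d_high)) < density
    return P * mask

def particle_filter_step(particles, weights, features, template, P, sigma=0.1):
    """One weighting step: project each candidate's high-dimensional feature
    into the low-dimensional space and weight it by similarity to the
    (also projected) template; return the best particle and new weights."""
    z = features @ P.T                      # (n_particles, d_low)
    t = template @ P.T                      # (d_low,)
    err = np.linalg.norm(z - t, axis=1)     # reconstruction-style distance
    w = weights * np.exp(-err**2 / (2 * sigma**2))
    w /= w.sum() + 1e-12                    # normalize to a distribution
    return particles[np.argmax(w)], w

# Toy run: 100 particles over 2-D positions, 512-D "deep" features.
d_high, d_low, n = 512, 32, 100
P = sparse_projection_matrix(d_high, d_low)
particles = rng.normal(size=(n, 2))
features = rng.normal(size=(n, d_high))
template = features[7].copy()               # pretend particle 7 matches the template
weights = np.full(n, 1.0 / n)
best, w = particle_filter_step(particles, weights, features, template, P)
```

In this toy setup the particle whose feature matches the template receives essentially all of the normalized weight, which is the behavior the low-dimensional similarity weighting is meant to provide.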

References:

[1] Ross D A, Lim J, Lin R S, et al. Incremental learning for robust visual tracking[J]. International Journal of Computer Vision, 2008, 77(1): 125-141. DOI:10.1007/s11263-007-0075-7.
[2] Adam A, Rivlin E, Shimshoni I. Robust fragments-based tracking using the integral histogram [C]//2006 IEEE Conference on Computer Vision and Pattern Recognition. New York, USA, 2006: 798-805.
[3] Kwon J, Lee K M. Visual tracking decomposition [C]//2010 IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, CA, USA, 2010: 1269-1276.
[4] Babenko B, Yang M H, Belongie S. Visual tracking with online multiple instance learning [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8): 1619-1632. DOI:10.1109/TPAMI.2010.226.
[5] Zhang K, Song H. Real-time visual tracking via online weighted multiple instance learning[J]. Pattern Recognition, 2013, 46(1): 397-411. DOI:10.1016/j.patcog.2012.07.013.
[6] Kalal Z, Mikolajczyk K, Matas J. Tracking-learning-detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7): 1409-1422. DOI:10.1109/TPAMI.2011.239.
[7] Zhang T, Ghanem B, Liu S, et al. Robust visual tracking via structured multi-task sparse learning [J]. International Journal of Computer Vision, 2013, 101(2): 367-383. DOI:10.1007/s11263-012-0582-z.
[8] Mei X, Ling H. Robust visual tracking and vehicle classification via sparse representation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(11): 2259-2272. DOI:10.1109/TPAMI.2011.66.
[9] Bao C, Wu Y, Ling H, et al. Real time robust L1 tracker using accelerated proximal gradient approach [C]//2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, Rhode Island, USA, 2012: 1830-1837.
[10] Zhong W, Lu H, Yang M H. Robust object tracking via sparse collaborative appearance model [J]. IEEE Transactions on Image Processing, 2014, 23(5): 2356-2368. DOI:10.1109/TIP.2014.2313227.
[11] Cheng X, Li N, Zhou T, et al. Object tracking via collaborative multi-task learning and appearance model updating [J]. Applied Soft Computing, 2015, 31: 81-90. DOI:10.1016/j.asoc.2015.03.002.
[12] Li H, Li Y, Porikli F. Robust online visual tracking with a single convolutional neural network [C]//2014 Asian Conference on Computer Vision. Singapore, 2014: 194-209. DOI:10.1007/978-3-319-16814-2_13.
[13] Wang N, Yeung D Y. Learning a deep compact image representation for visual tracking [C]//2013 Advances in Neural Information Processing Systems. Lake Tahoe, CA, USA, 2013: 809-817.
[14] Wang N, Li S, Gupta A, et al. Transferring rich feature hierarchies for robust visual tracking [EB/OL].(2015-04-23)[2016-02-19]. https://arxiv.org/abs/1501.04587.
[15] Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507. DOI:10.1126/science.1127647.

Similar References:

[1] Li Linchao, Qu Xu, Zhang Jian, et al. Missing value imputation method for heterogeneous traffic flow data based on feature fusion[J]. Journal of Southeast University (Natural Science Edition), 2018, 48(5): 972. [doi:10.3969/j.issn.1001-0505.2018.05.029]

Memo:
Received: 2016-06-27.
Biographies: Cheng Xu (1983—), male, Ph.D.; Zhang Yifeng (corresponding author), male, Ph.D., associate professor, yfz@seu.edu.cn.
Foundation items: The National Natural Science Foundation of China (No. 61571106), the Natural Science Foundation of Jiangsu Province (No. BK20151102), the Open Project of the Key Laboratory of Machine Perception of Ministry of Education, Peking University (No. K-2016-03).
Citation: Cheng Xu, Zhang Yifeng, Liu Yuan, et al. Object tracking algorithm based on deep feature[J]. Journal of Southeast University (Natural Science Edition), 2017, 47(1): 1-5. DOI:10.3969/j.issn.1001-0505.2017.01.001.
Last Update: 2017-01-20