Cheng Xu, Zhou Lin, Zhang Yifeng. Object tracking algorithm based on multiple loss generative adversary[J]. Journal of Southeast University (Natural Science Edition), 2018, 48(3): 400-405. [doi:10.3969/j.issn.1001-0505.2018.03.004]

Object tracking algorithm based on multiple loss generative adversary

Journal of Southeast University (Natural Science Edition) [ISSN: 1001-0505 / CN: 32-1178/N]

Volume:
48
Issue:
2018, No. 3
Pages:
400-405
Section:
Computer Science and Engineering
Publication date:
2018-05-20

Article Info

Title:
Object tracking algorithm based on multiple loss generative adversary
Author(s):
Cheng Xu 1,2; Zhou Lin 1; Zhang Yifeng 1,3
1 School of Information Science and Engineering, Southeast University, Nanjing 210096, China
2 CSIC Pride (Nanjing) Intelligent Equipment System Co., Ltd., Nanjing 210003, China
3 State Key Laboratory of Novel Software Technology, Nanjing University, Nanjing 210023, China
Keywords:
deep learning; generative adversarial nets; object tracking; loss function
CLC number:
TP391
DOI:
10.3969/j.issn.1001-0505.2018.03.004
Abstract:
To reduce the risk of tracking failure caused by external challenging factors such as occlusion, illumination change and motion blur, an object tracking algorithm based on a multiple-loss generative adversarial network is proposed. In the generative adversarial network, a dedicated decoder structure is designed for each of the three challenging scenes (occlusion, illumination change and motion blur), while the encoder shares its network parameters across them. During tracking, the encoder serves as a feature extractor for the target, and the target is localized within a particle filter framework. The loss function is defined in terms of a content loss, an intra-class loss and an identity-preserving loss, so that the prior knowledge obtained from adversarial training is combined with the inherent knowledge of the target to reconstruct a clear image. Experimental results demonstrate that the proposed algorithm achieves good tracking performance under occlusion, illumination change and motion blur.
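The combined objective described in the abstract can be sketched roughly as below. This is a minimal NumPy illustration only: the function names, the L2 form of each term and the weights are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def content_loss(reconstructed, target):
    # Pixel-wise reconstruction error between the decoder output and the
    # clear target image (here a mean-squared error; the paper's exact
    # form may differ).
    return np.mean((reconstructed - target) ** 2)

def intra_class_loss(features):
    # Pull encoder features of samples belonging to the same object
    # toward their mean, tightening the within-class cluster.
    center = features.mean(axis=0)
    return np.mean(np.sum((features - center) ** 2, axis=1))

def identity_preserving_loss(feat_reconstructed, feat_original):
    # Keep the identity of the tracked object: features of the
    # reconstructed image should stay close to those of the original.
    return np.mean((feat_reconstructed - feat_original) ** 2)

def total_loss(rec, tgt, feats, feat_rec, feat_orig,
               w_content=1.0, w_intra=0.1, w_id=0.1):
    # Weighted sum of the three terms; the weights are illustrative.
    return (w_content * content_loss(rec, tgt)
            + w_intra * intra_class_loss(feats)
            + w_id * identity_preserving_loss(feat_rec, feat_orig))
```

In this reading, adversarial training supplies the prior over clear images (the content term), while the intra-class and identity-preserving terms keep the reconstruction consistent with the specific target being tracked.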

References:

[1] Li X, Hu W, Shen C, et al. A survey of appearance models in visual object tracking[J]. ACM Transactions on Intelligent Systems and Technology, 2013, 4(4): 478-488. DOI: 10.1145/2508037.2508039.
[2] Ross D A, Lim J, Lin R S, et al. Incremental learning for robust visual tracking[J]. International Journal of Computer Vision, 2008, 77(1): 125-141. DOI: 10.1007/s11263-007-0075-7.
[3] Kwon J, Lee K M. Visual tracking decomposition[C]//2010 IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, CA, USA, 2010: 1269-1276. DOI: 10.1109/cvpr.2010.5539821.
[4] Avidan S. Ensemble tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(2): 261-271. DOI: 10.1109/TPAMI.2007.35.
[5] Babenko B, Yang M H, Belongie S. Visual tracking with online multiple instance learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8): 1619-1632. DOI: 10.1109/TPAMI.2010.226.
[6] Kalal Z, Mikolajczyk K, Matas J. Tracking-learning-detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7): 1409-1422. DOI: 10.1109/TPAMI.2011.239.
[7] Zhang J, Ma S, Sclaroff S. MEEM: Robust tracking via multiple experts using entropy minimization[C]//2014 Proceedings of European Conference on Computer Vision. Zurich, Switzerland, 2014: 188-203. DOI: 10.1007/978-3-319-10599-4_13.
[8] Zhong W, Lu H, Yang M H. Robust object tracking via sparse collaborative appearance model[J]. IEEE Transactions on Image Processing, 2014, 23(5): 2356-2368. DOI: 10.1109/TIP.2014.2313227.
[9] Wang N, Yeung D Y. Learning a deep compact image representation for visual tracking[C]//2013 Advances in Neural Information Processing Systems. Lake Tahoe, NV, USA, 2013: 809-817.
[10] Wang N, Li S, Gupta A, et al. Transferring rich feature hierarchies for robust visual tracking[EB/OL]. (2015-04-23)[2016-02-19]. https://arxiv.org/abs/1501.04587.
[11] Nam H, Han B. Learning multi-domain convolutional neural networks for visual tracking[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA, 2016: 4293-4302. DOI: 10.1109/cvpr.2016.465.
[12] Tao R, Gavves E, Smeulders A W M. Siamese instance search for tracking[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA, 2016: 1420-1429. DOI: 10.1109/cvpr.2016.158.
[13] Zhang K, Liu Q, Wu Y, et al. Robust visual tracking via convolutional networks without training[J]. IEEE Transactions on Image Processing, 2016, 25(4): 1779-1792. DOI: 10.1109/TIP.2016.2531283.

Memo

Received: 2017-10-25.
Biographies: Cheng Xu (1983—), male, doctor; Zhang Yifeng (corresponding author), male, doctor, associate professor, yfz@seu.edu.cn.
Foundation items: The National Natural Science Foundation of China (No. 61571106), the Natural Science Foundation of Jiangsu Province (No. BK20151102), the Open Project of the State Key Laboratory of Novel Software Technology of Nanjing University (No. KFKT2017B17).
Citation: Cheng Xu, Zhou Lin, Zhang Yifeng, et al. Object tracking algorithm based on multiple loss generative adversary[J]. Journal of Southeast University (Natural Science Edition), 2018, 48(3): 400-405. DOI: 10.3969/j.issn.1001-0505.2018.03.004.
Last Update: 2018-05-20