[1] Hu Fei, Luo Limin, Liu Jia, et al. Action recognition based on space-time interest points and topic model[J]. Journal of Southeast University (Natural Science Edition), 2011, 41(5): 962-966. [doi:10.3969/j.issn.1001-0505.2011.05.013]

Action recognition based on space-time interest points and topic model

Journal of Southeast University (Natural Science Edition) [ISSN:1001-0505/CN:32-1178/N]

Volume:
41
Issue:
2011, No. 5
Pages:
962-966
Section:
Computer Science and Engineering
Publication date:
2011-09-20

Article Info

Title:
Action recognition based on space-time interest points and topic model
Author(s):
Hu Fei(1,2), Luo Limin(1), Liu Jia(3), Zuo Xin(1)
(1 School of Computer Science and Engineering, Southeast University, Nanjing 210096, China)
(2 Armed Police Command of Jiangxi, Nanchang 330025, China)
(3 Institute of Image Processing and Recognition, Shanghai Jiaotong University, Shanghai 200240, China)
Keywords:
action recognition; space-time interest points; topic model; bag of words
CLC number:
TP391.4;TN911.7
DOI:
10.3969/j.issn.1001-0505.2011.05.013
Abstract:
A novel human action recognition method based on a probabilistic topic model is proposed. The method extracts local space-time interest point features and uses a bag-of-words representation to describe several common human actions, including running, jumping, and hand-waving. In the probabilistic topic model, the action category label of a video corresponds to a latent variable, so inferring the latent variables classifies the action of the entire video. In addition, the algorithm can assign each interest point to a different action category, enabling recognition in more complex scenes containing multiple actions. The method is fast and accurate and achieves good results in practice. Extensive experiments show that the recognition rate of the algorithm exceeds 85%.
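The pipeline the abstract describes (local descriptors → visual-word codebook → per-video bag-of-words histograms → topic model over latent action categories) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses synthetic stand-in descriptors instead of real space-time interest points, k-means for the codebook, and scikit-learn's `LatentDirichletAllocation` for the topic model; the two-action setup and all parameter values are assumptions for the sake of the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic stand-in for space-time interest point descriptors: each
# "video" yields a set of local descriptors drawn from one of two
# action-specific distributions (e.g. "run" vs. "wave").
def make_video(action, n_points=60, dim=16):
    center = np.zeros(dim) if action == 0 else np.full(dim, 4.0)
    return center + rng.normal(size=(n_points, dim))

labels = [0] * 10 + [1] * 10
videos = [make_video(a) for a in labels]

# 1) Build a visual vocabulary by clustering all descriptors (bag of words).
all_desc = np.vstack(videos)
codebook = KMeans(n_clusters=20, n_init=10, random_state=0).fit(all_desc)

# 2) Represent each video as a histogram of visual-word counts.
def bow_histogram(desc):
    words = codebook.predict(desc)
    return np.bincount(words, minlength=20)

X = np.array([bow_histogram(v) for v in videos])

# 3) Fit a topic model; topics play the role of latent action categories,
#    and each video is classified by its dominant inferred topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)       # per-video topic proportions
pred = theta.argmax(axis=1)    # dominant topic = predicted action

# Topics are unordered, so score against both label permutations.
acc = max(np.mean(pred == np.array(labels)), np.mean(pred != np.array(labels)))
print(f"unsupervised clustering accuracy: {acc:.2f}")
```

Because LDA models each video as a mixture of topics, the same machinery also yields a per-interest-point topic assignment, which is what lets the paper's approach separate multiple concurrent actions in one video.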

References:

[1] Laptev I.On space-time interest points[J].International Journal of Computer Vision,2005,64(2/3):107-123.
[2] Dollar P,Rabaud V,Cottrell G,et al.Behavior recognition via sparse spatio-temporal features[C]//Proceedings of 2nd Joint IEEE International Workshop on VS-PETS.Beijing,China,2005:65-72.
[3] Niebles J,Wang Hongcheng,Li Feifei.Unsupervised learning of human action categories using spatial-temporal words[J].International Journal of Computer Vision,2008,79(3):299-318.
[4] Wang Yang,Mori G.Human action recognition by semilatent topic models[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2009,31(10):1762-1774.
[5] Blank M,Gorelick L,Shechtman E,et al.Actions as space-time shapes[C]//Proceedings of the 10th IEEE International Conference on Computer Vision.Beijing,China,2005,2:1395-1402.
[6] Scovanner P,Ali S,Shah M.A 3-dimensional SIFT descriptor and its application to action recognition[C]//Proceedings of the 15th ACM International Conference on Multimedia.Augsburg,Germany,2007:357-360.
[7] Blei D M,Ng A Y,Jordan M I.Latent Dirichlet allocation[J].Journal of Machine Learning Research,2003,3(4/5):993-1022.
[8] Schuldt C,Laptev I,Caputo B.Recognizing human actions:a local SVM approach[C]//Proceedings of the 17th International Conference on Pattern Recognition.Cambridge,UK,2004,3:32-36.
[9] Dhillon P S,Nowozin S,Lampert C H.Combining appearance and motion for human action classification in videos[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition Workshops.Miami,FL,USA,2009:22-29.
[10] Liu J Q,Ali S,Shah M.Recognizing human actions using multiple features[C]//2008 IEEE Conference on Computer Vision and Pattern Recognition.Anchorage,AK,USA,2008:4587527.

Related articles:

[1]王健弘,张旭,章品正,等.基于时空信息和非负成分表示的动作识别[J].东南大学学报(自然科学版),2016,46(4):675.[doi:10.3969/j.issn.1001-0505.2016.04.001]
 Wang Jianhong,Zhang Xu,Zhang Pinzheng,et al.Action recognition based on spatio-temporal information and nonnegative component representation[J].Journal of Southeast University (Natural Science Edition),2016,46(4):675.[doi:10.3969/j.issn.1001-0505.2016.04.001]
[2]李亚玮,金立左,孙长银,等.基于光流约束自编码器的动作识别[J].东南大学学报(自然科学版),2017,47(4):691.[doi:10.3969/j.issn.1001-0505.2017.04.011]
 Li Yawei,Jin Lizuo,Sun Changyin,et al.Action recognition based on optical flow constrained auto-encoder[J].Journal of Southeast University (Natural Science Edition),2017,47(4):691.[doi:10.3969/j.issn.1001-0505.2017.04.011]

Memo:
About the authors: Hu Fei (1983—), male, Ph.D. candidate; Luo Limin (corresponding author), male, Ph.D., professor, doctoral supervisor, luo.list@seu.edu.cn.
Citation format: Hu Fei, Luo Limin, Liu Jia, et al. Action recognition based on space-time interest points and topic model[J]. Journal of Southeast University (Natural Science Edition), 2011, 41(5): 962-966. [doi:10.3969/j.issn.1001-0505.2011.05.013]
Last update: 2011-09-20