[1]鲍文霞,瞿金杰,王年,等.基于空间聚合加权卷积神经网络的力触觉足迹识别[J].东南大学学报(自然科学版),2020,50(5):959-964.[doi:10.3969/j.issn.1001-0505.2020.05.023]
 Bao Wenxia,Qu Jinjie,Wang Nian,et al.Force-tactile footprint recognition based on spatial aggregation weighted convolutional neural network[J].Journal of Southeast University (Natural Science Edition),2020,50(5):959-964.[doi:10.3969/j.issn.1001-0505.2020.05.023]

基于空间聚合加权卷积神经网络的力触觉足迹识别 (Force-tactile footprint recognition based on spatial aggregation weighted convolutional neural network)

《东南大学学报(自然科学版)》Journal of Southeast University (Natural Science Edition) [ISSN:1001-0505/CN:32-1178/N]

卷/Volume: 50
期数/Issue: 2020年第5期 (No. 5, 2020)
页码/Pages: 959-964
栏目/Column: 自动化 (Automation)
出版日期/Publication date: 2020-09-20

文章信息/Info

Title:
Force-tactile footprint recognition based on spatial aggregation weighted convolutional neural network
作者:
鲍文霞1, 瞿金杰1, 王年1, 唐俊1, 鲁玺龙2
1安徽大学电子信息工程学院, 合肥 230601; 2公安部物证鉴定中心, 北京 100038
Author(s):
Bao Wenxia1, Qu Jinjie1, Wang Nian1, Tang Jun1, Lu Xilong2
1School of Electronic Information Engineering, Anhui University, Hefei 230601, China
2Institute of Forensic Science of China, Ministry of Public Security, Beijing 100038, China
关键词:
力触觉; 足迹识别; 空间聚合加权; VGG19卷积神经网络
Keywords:
force tactile; footprint recognition; spatial aggregation weighting; VGG19 convolutional neural network
分类号/CLC number:
TP183
DOI:
10.3969/j.issn.1001-0505.2020.05.023
摘要:
为了提高力触觉足迹识别的准确率,提出一种基于空间聚合加权注意力机制的足迹识别算法.首先,采用压力足迹采集器采集并构建一个包含100人2 000幅力触觉足迹图像的数据集.然后,采用VGG19卷积神经网络预训练模型提取特征,为获取特征图中足迹压力分布感兴趣区域,设计一种空间聚合加权模块(SAWM),该模块专注高响应区域从而提取足迹中显著区域局部特征,并与输入特征图加权融合,保留显著性特征,抑制不重要特征;最后输出的特征经过平均池化在全连接层实现力触觉足迹的识别. 试验结果表明,所提算法准确率达到了91.20%,优于其他注意力机制算法以及传统的足迹识别算法. 采用空间聚合加权注意力机制网络模型能够有效进行足迹识别,为身份识别提供技术支撑.
Abstract:
To improve the accuracy of force-tactile footprint recognition, a footprint recognition algorithm based on a spatial aggregation weighted attention mechanism was proposed. First, a pressure footprint collector was used to collect and construct a dataset containing 2 000 force-tactile footprint images from 100 people. Then, a pre-trained VGG19 convolutional neural network was used to extract features. To locate the region of interest of the footprint pressure distribution in the feature map, a spatial aggregation weighting module (SAWM) was designed; it focuses on high-response regions to extract local features of the salient regions in the footprint, and these features are then fused with the input feature map through weighting, preserving the salient features and suppressing the unimportant ones. Finally, the output features were average-pooled and passed to a fully connected layer to recognize the force-tactile footprint. Experimental results show that the accuracy of the proposed algorithm reaches 91.20%, outperforming other attention-mechanism algorithms and traditional footprint recognition algorithms. The network model with the spatial aggregation weighted attention mechanism can effectively perform footprint recognition and provide technical support for identity recognition.
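
The paper does not publish code, and the abstract only outlines the SAWM at a high level, so the following PyTorch sketch is just one plausible reading of the described pipeline: a VGG19 backbone, a spatial aggregation weighting step that turns aggregated channel responses into a spatial weight map and fuses it with the input features, then global average pooling and a fully connected classifier over the 100 subjects. The internals of `SAWM` (mean/max channel aggregation, the 7x7 convolution, the residual-style fusion) and the class names are assumptions made for illustration; only the VGG19 backbone, the attention-pooling-classification order, and the 100-person dataset come from the abstract.

```python
# Minimal sketch (not the authors' code): a spatial aggregation weighting
# attention module on top of VGG19 features, assuming the module aggregates
# channel responses into a spatial weight map and re-weights the input.
import torch
import torch.nn as nn
from torchvision import models


class SAWM(nn.Module):
    """Spatial aggregation weighting module (hypothetical internals).

    Aggregates each spatial location across channels (mean and max),
    maps the aggregate to a weight in [0, 1], and fuses the weighted
    features with the input so high-response (salient) regions are
    emphasized and unimportant regions are suppressed.
    """

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = torch.mean(x, dim=1, keepdim=True)      # N x 1 x H x W
        max_map, _ = torch.max(x, dim=1, keepdim=True)    # N x 1 x H x W
        weights = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x + x * weights                            # weighted fusion with the input (assumed form)


class FootprintNet(nn.Module):
    """VGG19 backbone -> SAWM -> global average pooling -> fully connected classifier."""

    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.backbone = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.attention = SAWM()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.attention(self.backbone(x))  # 512-channel feature map
        pooled = self.pool(feats).flatten(1)      # global average pooling
        return self.fc(pooled)                    # identity logits (100 subjects)


if __name__ == "__main__":
    model = FootprintNet(num_classes=100)
    logits = model(torch.randn(2, 3, 224, 224))   # a batch of footprint pressure images
    print(logits.shape)                           # torch.Size([2, 100])
```

In this sketch the fusion `x + x * weights` keeps the original response and adds an attention-weighted copy; a pure re-weighting `x * weights` is an equally plausible reading of the abstract's "加权融合" (weighted fusion).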

参考文献/References:

[1] Gurney J K,Kersting U G,Rosenbaum D.Between-day reliability of repeated plantar pressure distribution measurements in a normal population[J].Gait & Posture,2008,27(4):706-709.DOI:10.1016/j.gaitpost.2007.07.002.
[2] Feng Z.Biometric identification technology and development trend of physiological characteristics[J].Journal of Physics:Conference Series,2018,1060:012047.DOI:10.1088/1742-6596/1060/1/012047.
[3] Mukhra R,Krishan K,Kanchan T.Bare footprint metric analysis methods for comparison and identification in forensic examinations:A review of literature[J].Journal of Forensic and Legal Medicine,2018,58:101-112.DOI:10.1016/j.jflm.2018.05.006.
[4] Nirenberg M S,Ansert E,Krishan K,et al.Two-dimensional metric comparison between dynamic bare and sock-clad footprints for its forensic implications—A pilot study[J].Science & Justice,2019,59(1):46-51.DOI:10.1016/j.scijus.2018.09.001.
[5] Nguyen D P,Phan C B,Koo S.Predicting body movements for person identification under different walking conditions[J].Forensic Science International,2018,290:303-309.DOI:10.1016/j.forsciint.2018.07.022.
[6] Kulkarni P S,Kulkarni V B.Human footprint classification using image parameters[C]//2015 International Conference on Pervasive Computing(ICPC).Pune,India,2015:1-5.DOI:10.1109/PERVASIVE.2015.7087011.
[7] Osisanwo F Y,Adetunmbi A O,Álese B K.Barefoot morphology:A person unique feature for forensic identification[C]//The 9th International Conference for Internet Technology and Secured Transactions.London,UK,2014:356-359.DOI:10.1109/ICITST.2014.7038837.
[8] Khokher R,Singh R C,Kumar R.Footprint recognition with principal component analysis and independent component analysis[J].Macromolecular Symposia,2015,347(1):16-26.DOI:10.1002/masy.201400045.
[9] Heydarzadeh M,Birjandtalab J,Pouyan M B,et al.Gaits analysis using pressure image for subject identification[C]//2017 IEEE EMBS International Conference on Biomedical & Health Informatics.Orlando,FL,USA,2017:333-336.DOI:10.1109/BHI.2017.7897273.
[10] Wang X N,Wang H Y,Cheng Q,et al.Single 2D pressure footprint based person identification[C]//2017 IEEE International Joint Conference on Biometrics.Denver,CO,USA,2017:413-419.DOI:10.1109/BTAS.2017.8272725.
[11] Simonyan K,Zisserman A.Very deep convolutional networks for large-scale image recognition[EB/OL].(2014-09-04)[2020-05-18].https://arxiv.org/abs/1409.1556.
[12] Lin T Y,RoyChowdhury A,Maji S.Bilinear CNN models for fine-grained visual recognition[C]//2015 IEEE International Conference on Computer Vision.Santiago,Chile,2015:1449-1457.DOI:10.1109/ICCV.2015.170.
[13] Hu J,Shen L,Sun G.Squeeze-and-excitation networks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Salt Lake City,UT,USA,2018:7132-7141.DOI:10.1109/CVPR.2018.00745.
[14] Woo S,Park J,Lee J Y,et al.CBAM:Convolutional block attention module[M]//Computer Vision-ECCV 2018.Cham:Springer International Publishing,2018:3-19.DOI:10.1007/978-3-030-01234-2_1.
[15] Sun S L,Liu Y H,Mao L.Multi-view learning for visual violence recognition with maximum entropy discrimination and deep features[J].Information Fusion,2019,50:43-53.DOI:10.1016/j.inffus.2018.10.004.
[16] Miao H C,Wang J H,Zhang Q Y.Feature recognition of concave-convex shape based on single image[C]//2018 11th International Conference on Intelligent Computation Technology and Automation(ICICTA).Changsha,China,2018:106-110.DOI:10.1109/ICICTA.2018.00032.
[17] Monisha S J,Sheeba G M.Gait based authentication with hog feature extraction[C]//2018 Second International Conference on Inventive Communication and Computational Technologies(ICICCT).Coimbatore,India,2018:1478-1483.DOI:10.1109/ICICCT.2018.8473007.
[18] Mohan K,Chandrasekhar P,Jilani S A K.A combined HOG-LPQ with Fuz-SVM classifier for Object face Liveness Detection[C]//2017 International Conference on I-SMAC(IoT in Social,Mobile,Analytics and Cloud)(I-SMAC).Palladam,India,2017:531-537.DOI:10.1109/I-SMAC.2017.8058406.

备注/Memo

收稿日期/Received: 2020-05-26.
作者简介/Biographies: 鲍文霞 (Bao Wenxia, b. 1980), female, Ph.D., associate professor; 王年 (Wang Nian, corresponding author), male, Ph.D., professor, doctoral supervisor, wn_xlb@ahu.edu.cn.
基金项目/Foundation items: The National Key R&D Program of China (No. 2018YFC0807302), the National Natural Science Foundation of China (No. 61772032), and the Special Program for Basic Work of Strengthening Police with Science and Technology (No. 2018GABJC15).
引用本文/Citation: 鲍文霞,瞿金杰,王年,等.基于空间聚合加权卷积神经网络的力触觉足迹识别[J].东南大学学报(自然科学版),2020,50(5):959-964. DOI:10.3969/j.issn.1001-0505.2020.05.023.
更新日期/Last Update: 2020-09-20