Improving Biometric Identification Performance Using PCANet Deep Learning and Multispectral Palmprint

Author(s):  
Abdallah Meraoumia ◽  
Farid Kadri ◽  
Hakim Bendjenna ◽  
Salim Chitroub ◽  
Ahmed Bouridane
Author(s):  
Weimeng Chu ◽  
Shunan Wu ◽  
Xiao He ◽  
Yufei Liu ◽  
Zhigang Wu

The identification accuracy of the inertia tensor of a combined spacecraft, composed of a servicing spacecraft and a captured target, is easily degraded by measurement noise in the angular rate. Because the operating environment of a combined spacecraft changes frequently in space, this measurement noise can be very complex. In this paper, an inertia tensor identification approach based on deep learning is proposed to improve the identification of the inertia tensor of a combined spacecraft in the presence of complex measurement noise. A deep neural network model for identification is constructed and trained with sufficient training data and a tailored learning strategy. To verify the identification performance of the proposed model, two test sets with different levels of measurement noise are used in simulation. Comparison tests are also conducted among the proposed deep neural network model, a recursive least squares identification method, and a conventional deep neural network model. The results show that the proposed model yields more accurate and stable identification of the inertia tensor of a combined spacecraft in changeable and complex operating environments.
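As an illustration of the kind of model described in this abstract, the following is a minimal sketch, assuming a simple feed-forward regressor in PyTorch that maps a window of noisy angular-rate measurements to the six independent inertia-tensor components. The layer sizes, window length, and training step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a feed-forward network that
# regresses the six independent inertia-tensor components [Ixx, Iyy, Izz, Ixy, Ixz, Iyz]
# from a window of noisy angular-rate measurements.
import torch
import torch.nn as nn

class InertiaTensorNet(nn.Module):
    def __init__(self, window_len=100, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window_len * 3, hidden),   # 3-axis angular rate per time step
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 6),                # a symmetric inertia tensor has 6 free entries
        )

    def forward(self, omega_window):
        # omega_window: (batch, window_len, 3) noisy angular-rate samples
        return self.net(omega_window.flatten(start_dim=1))

model = InertiaTensorNet()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on synthetic data shaped like the problem described above.
omega = torch.randn(32, 100, 3)   # batch of noisy angular-rate windows
target = torch.randn(32, 6)       # corresponding ground-truth tensor components
loss = loss_fn(model(omega), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```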


2019 ◽  
Vol 13 (2) ◽  
pp. 282-291 ◽  
Author(s):  
Dwaipayan Biswas ◽  
Luke Everson ◽  
Muqing Liu ◽  
Madhuri Panwar ◽  
Bram-Ernst Verhoef ◽  
...  

Author(s):  
Hervé Goëau ◽  
Pierre Bonnet ◽  
Alexis Joly

Automated plant identification has recently improved significantly due to advances in deep learning and the availability of large amounts of field photos. As an illustration, the classification accuracy over 10K species measured in the LifeCLEF challenge (Goëau et al. 2018) reached 90%, very close to that of human experts. However, the profusion of field images only concerns a few tens of thousands of species, mainly located in North America and Western Europe. Conversely, the richest regions in terms of biodiversity, such as tropical countries, suffer from a shortage of training data (Pitman 2021). Consequently, the identification performance of the most advanced models on the flora of these regions is much lower (Goëau et al. 2019). Nevertheless, for several centuries, botanists have systematically collected, catalogued, and stored plant specimens in herbaria. Considerable recent efforts by the biodiversity informatics community, such as DiSSCo (Addink et al. 2018) and iDigBio (Matsunaga et al. 2013), have made millions of digitized specimens from these collections available online. A key question is therefore whether these digitized specimens could be used to improve the identification performance for species of which we have very few (if any) photos. However, this is a very difficult problem from a machine learning point of view. The visual appearance of a herbarium specimen is actually very different from a field photograph because the specimens are dried and pressed on a herbarium sheet before being digitized (Fig. 1).

To advance research on this topic, we built a large dataset that we shared as one of the challenges of the LifeCLEF 2020 (Goëau et al. 2020) and 2021 evaluation campaigns (Goëau et al. 2021). It includes more than 320K herbarium specimens collected mostly from the Guiana Shield and the Northern Amazon Rainforest, focusing on about 1K plant species of the French Guiana flora. A valuable asset of this collection is that some of the specimens are accompanied by a few photos of the same specimen, allowing for more precise machine learning. In addition to this training data, we also built a test set for model evaluation, composed of 3,186 field photos collected by two of the best experts on Guyanese flora.

Based on this dataset, about ten research teams have developed deep learning methods to address the challenge (including the authors of this abstract as the organizing team). A detailed description of these methods can be found in the technical notes written by the participating teams (Goëau et al. 2020, Goëau et al. 2021). The methods can be divided into two categories: those based on classical convolutional neural networks (CNNs) trained simply by mixing digitized specimens and photos, and those based on advanced domain adaptation techniques with the objective of learning a joint representation space between field and herbarium images. The domain adaptation methods themselves were of two types: those based on adversarial regularization (Motiian et al. 2017), forcing herbarium specimens and photos to share the same representations, and those based on metric learning, maximizing inter-species distances and minimizing intra-species distances in the representation space.

In Table 1, we report the results achieved by the different methods evaluated during the 2020 edition of the challenge. The evaluation metric used is the mean reciprocal rank (MRR), i.e., the average of the inverse of the rank of the correct species in the list of predicted species. In addition to this main score, a second MRR score is computed on a subset of the test set composed of the most difficult species, i.e., those that are the least frequently photographed in the field. The main outcomes we can derive from these results are the following:

Classical deep learning models fail to identify plant photos from digitized herbarium specimens. The best classical CNN trained on the provided data resulted in a very low MRR score (0.011). Even with the use of additional training data (e.g., photos and digitized herbarium specimens from GBIF), the MRR score remains very low (0.039).

Domain adaptation methods provide significant improvement, but the task remains challenging. The best MRR score (0.180) was achieved by using adversarial regularization (FSDA, Motiian et al. 2017). This is much better than the classical CNN models, but there is still a lot of progress to be made to reach the performance of a truly functional identification system (the MRR score on classical plant identification tasks can be up to 0.9).

No method fits all. As shown in Table 1, the metric learning method has a significantly better MRR score on the most difficult species (0.107). However, its performance on the species with more photos is much lower than that of the adversarial technique.

In 2021, the challenge was run again with additional information provided to train the models, i.e., species traits (plant life form, woodiness, and plant growth form). The use of these traits allowed a slight performance improvement of the best adversarial adaptation method (MRR of 0.198). In conclusion, the results of the experiments conducted are promising and demonstrate the potential value of digitized herbarium data for automated plant identification. However, progress is still needed before this type of approach can be integrated into production applications.
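As a worked illustration of the MRR metric used above, the following is a minimal sketch in Python; the function name and example species labels are hypothetical.

```python
import numpy as np

def mean_reciprocal_rank(true_labels, ranked_predictions):
    """MRR: average of 1/rank of the correct species over all test photos."""
    reciprocal_ranks = []
    for truth, ranking in zip(true_labels, ranked_predictions):
        rank = ranking.index(truth) + 1   # 1-based rank of the correct species
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# Example: two test photos; the correct species is ranked 1st and 4th respectively.
print(mean_reciprocal_rank(
    ["sp_a", "sp_b"],
    [["sp_a", "sp_c"], ["sp_c", "sp_d", "sp_e", "sp_b"]],
))  # -> (1/1 + 1/4) / 2 = 0.625
```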


Author(s):  
Prof. Jaychand Upadhyay ◽  
Prof. Tad Gonsalves ◽  
Rohan Paranjpe ◽  
Hiralal Purohit ◽  
Rohan Joshi

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4001 ◽  
Author(s):  
Jucheol Moon ◽  
Nelson Hebert Minaya ◽  
Nhat Anh Le ◽  
Hee-Chan Park ◽  
Sang-Il Choi

Gait is a characteristic that has been utilized for identifying individuals. As human gait information can now be captured by several types of devices, many studies have proposed biometric identification methods based on gait information. As research continues, the identification accuracy of this technology has been improved by gathering information from multi-modal sensors. However, in past studies, gait information was collected using ancillary devices, and the identification accuracy was not high enough for biometric identification. In this study, we propose a deep learning-based biometric model that identifies people from gait information collected through a wearable device, namely an insole. The identification accuracy of the proposed model when utilizing multi-modal sensing is over 99%.
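As an illustration of this kind of multi-modal gait classifier, the following is a minimal sketch in PyTorch; the channel count, window length, number of subjects, and architecture are assumptions for illustration, not the authors' model.

```python
# Minimal sketch (assumed architecture): a 1D CNN that identifies a person from a
# fixed-length window of multi-channel insole signals (e.g., pressure, accelerometer,
# and gyroscope channels concatenated along the channel dimension).
import torch
import torch.nn as nn

class GaitIdentifier(nn.Module):
    def __init__(self, n_channels=16, n_subjects=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_subjects)

    def forward(self, x):
        # x: (batch, n_channels, window_len) sensor window from the insole
        return self.classifier(self.features(x).squeeze(-1))

model = GaitIdentifier()
scores = model(torch.randn(8, 16, 200))   # 8 windows, 16 channels, 200 samples each
print(scores.shape)                       # -> torch.Size([8, 30]): one score per subject
```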

