The Development of an Identification Photo Booth System based on a Deep Learning Automatic Image Capturing Method

Author(s):  
Yu-Xiang Zhao ◽  
Yi-Zeng Hsieh ◽  
Shih-Syun Lin

With advances in technology, photo booths equipped with automatic capturing systems have gradually replaced the identification (ID) photo service provided by photography studios, enabling consumers to save a considerable amount of time and money. Common automatic capturing systems employ text and voice instructions to guide users in capturing their ID photos; however, the results may not conform to ID photo specifications. To address this issue, this study proposes an ID photo capturing algorithm that automatically detects facial contours and adjusts the size of captured images. The authors adopted a deep learning method (You Only Look Once) to detect the face and applied a semi-automatic facial-landmark annotation technique to locate the lip and chin regions within the facial region. In the experiments, subjects were seated at various distances and heights to test the performance of the proposed algorithm. The experimental results show that the proposed algorithm can effectively and accurately capture ID photos that satisfy the required specifications.
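The size-adjustment step described above can be sketched as follows. This is a minimal illustration, not the paper's actual specification: the head-to-photo ratio, the 15% headroom, and the 3:4 aspect are assumed values, and `face` stands for a bounding box produced by a detector such as YOLO.

```python
def id_photo_crop(face, img_size, head_ratio=0.7, aspect=3.0 / 4.0):
    """Compute a crop rectangle (left, top, width, height) around a detected face.

    `face` is an (x, y, w, h) bounding box, e.g. from a YOLO face detector.
    `head_ratio` is the fraction of the photo height the head should occupy.
    """
    x, y, w, h = face
    img_w, img_h = img_size
    crop_h = h / head_ratio          # head fills head_ratio of the photo height
    crop_w = crop_h * aspect         # portrait aspect, e.g. 3:4
    # centre the crop horizontally on the face, leave ~15% headroom above it
    left = x + w / 2.0 - crop_w / 2.0
    top = y - 0.15 * crop_h
    # clamp the crop so it stays inside the image
    left = max(0.0, min(left, img_w - crop_w))
    top = max(0.0, min(top, img_h - crop_h))
    return (left, top, crop_w, crop_h)
```

In a real booth, the chin and lip landmarks mentioned in the abstract would refine the bottom edge of this rectangle before resizing to the target ID dimensions.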

2020 ◽  
Vol 32 ◽  
pp. 03011
Author(s):  
Divya Kapil ◽  
Aishwarya Kamtam ◽  
Akhil Kedare ◽  
Smita Bharne

Surveillance systems are used to monitor activities directly or indirectly, and most of them rely on face recognition techniques. This work builds an automated, contemporary biometric surveillance system based on deep learning, which can be applied in various ways. The face prints of known persons are stored in a database along with relevant statistics, and the system performs face recognition against them. When an unknown face is detected, an alarm rings so that security staff can be alerted and further action taken. The system automatically learns appearance changes while detecting faces and thereby maintains accurate face recognition. Deep learning methods, in particular the Convolutional Neural Network (CNN), are of great significance in the area of image processing. The system is applicable to monitoring activities on housing society premises.
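The known/unknown decision that drives the alarm can be sketched as below, assuming face embeddings have already been produced by a CNN; the cosine-similarity threshold of 0.6 is an illustrative choice, not a value from the paper.

```python
import numpy as np

def identify(embedding, database, threshold=0.6):
    """Match a face embedding against enrolled face prints.

    Returns (name, score) for the best match, or (None, score) when the
    best cosine similarity falls below `threshold` -- the unknown-face
    case that would trigger the alarm.
    """
    best_name, best_score = None, -1.0
    for name, ref in database.items():
        score = float(np.dot(embedding, ref)
                      / (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:
        return None, best_score
    return best_name, best_score
```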


2019 ◽  
Vol 9 (18) ◽  
pp. 3935 ◽  
Author(s):  
Kazushige Okayasu ◽  
Kota Yoshida ◽  
Masataka Fuchida ◽  
Akio Nakamura

This study aims to propose a vision-based method to classify mosquito species. To investigate the efficiency of the method, we compared two different classification approaches: the handcrafted feature-based conventional method and the convolutional neural network-based deep learning method. For the conventional method, 12 types of handcrafted features were extracted and a support vector machine was adopted for classification. For the deep learning method, three types of architectures were adopted for classification. We built a mosquito image dataset of 14,400 images covering three mosquito species, comprising 12,000 images for training, 1,500 for testing, and 900 for validation. Experimental results revealed that the accuracy of the conventional method using the scale-invariant feature transform algorithm was 82.4% at maximum, whereas the accuracy of the deep learning method reached 95.5% with a residual network using data augmentation. From these results, deep learning can be considered effective for classifying the mosquito species in the proposed dataset. Furthermore, data augmentation improves the accuracy of mosquito species classification.
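The handcrafted-feature pipeline can be illustrated in miniature. This is a hedged stand-in, not the paper's method: a normalised intensity histogram replaces the 12 feature types (including SIFT), and a nearest-centroid rule replaces the support vector machine.

```python
import numpy as np

def intensity_histogram(img, bins=16):
    """A minimal handcrafted feature: a normalised intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def nearest_centroid_predict(feat, centroids):
    """Assign the feature vector to the class with the closest centroid
    (a simple proxy for the SVM classification stage)."""
    dists = {label: np.linalg.norm(feat - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)
```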


2020 ◽  
Author(s):  
Ching Tarn ◽  
Wen-Feng Zeng ◽  
Zhengcong Fei ◽  
Si-Min He

Abstract
Spectrum prediction using deep learning has attracted a lot of attention in recent years. Although existing deep learning methods have dramatically increased the prediction accuracy, there is still considerable room for improvement, which is presently limited by differences in fragmentation types and instrument settings. In this work, we use a few-shot learning method to fit the data online and make up for this shortcoming. The method is evaluated on ten datasets covering Velos, QE, Lumos, and Sciex instruments with differently set collision energies. Experimental results show that few-shot learning can achieve higher prediction accuracy with almost negligible computing resources. For example, on the dataset from the untrained instrument Sciex-6600, the prediction accuracy increased from 69.7% to 86.4% within about 10 seconds; on the CID (collision-induced dissociation) dataset, the prediction accuracy of the model trained on HCD (higher-energy collisional dissociation) spectra increased from 48.0% to 83.9%. It is also shown that the method is not sensitive to data quality and is sufficiently efficient to close the accuracy gap. The source code of pDeep3 is available at http://pfind.ict.ac.cn/software/pdeep3.
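The few-shot idea, refining a pretrained model online with a handful of labelled spectra and a few gradient steps, can be sketched with a toy linear model. This is only an illustration under assumed hyper-parameters; pDeep3 itself fine-tunes a neural spectrum predictor, not a linear regressor.

```python
import numpy as np

def few_shot_finetune(w, X, y, lr=0.1, steps=20):
    """Refine pretrained weights `w` on a small support set (X, y) with a
    few gradient steps on mean squared error -- the few-shot idea in
    miniature."""
    w = w.copy()
    for _ in range(steps):
        # gradient of mean((X @ w - y)^2) with respect to w
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w
```

The cheapness of this adaptation step is what makes the reported "within about 10 seconds" online fitting plausible: only a few passes over a small support set are needed.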


2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Huanyao Zhang ◽  
Danqing Hu ◽  
Huilong Duan ◽  
Shaolei Li ◽  
Nan Wu ◽  
...  

Abstract
Background: Computed tomography (CT) reports record a large volume of valuable information about patients' conditions and radiologists' interpretations of radiology images, which can be used for clinical decision-making and further academic study. However, the free-text nature of clinical reports is a critical barrier to using these data more effectively. In this study, we investigate a novel deep learning method to extract entities from Chinese CT reports for lung cancer screening and TNM staging.
Methods: The proposed approach presents a new named entity recognition algorithm, the BERT-based BiLSTM-Transformer network (BERT-BTN) with pre-training, to extract clinical entities for lung cancer screening and staging. Specifically, instead of traditional word embedding methods, BERT is applied to learn deep semantic representations of characters. Following the bidirectional long short-term memory layer, a Transformer layer is added to capture the global dependencies between characters. In addition, a pre-training technique is employed to alleviate the problem of insufficient labeled data.
Results: We verify the effectiveness of the proposed approach on a clinical dataset containing 359 CT reports collected from the Department of Thoracic Surgery II of Peking University Cancer Hospital. The experimental results show that the proposed approach achieves an 85.96% macro-F1 score under the exact match scheme, improving performance by 1.38%, 1.84%, 3.81%, 4.29%, 5.12%, 5.29%, and 8.84% over BERT-BTN without pre-training, BERT-LSTM, BERT-fine-tune, BERT-Transformer, FastText-BTN, FastText-BiLSTM, and FastText-Transformer, respectively.
Conclusions: In this study, we developed a novel deep learning method, BERT-BTN with pre-training, to extract clinical entities from Chinese CT reports. The experimental results indicate that the proposed approach can efficiently recognize various clinical entities related to lung cancer screening and staging, showing potential for further clinical decision-making and academic research.
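Downstream of any character-level tagger such as BERT-BTN, the predicted BIO tags must be decoded into entity spans before exact-match scoring. A minimal decoder follows; the tag labels in the usage example are hypothetical, not the paper's actual entity schema.

```python
def decode_bio(tokens, tags):
    """Merge character-level BIO tags into (entity_text, label) spans."""
    entities, cur, cur_label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur:
                entities.append(("".join(cur), cur_label))
            cur, cur_label = [tok], tag[2:]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur.append(tok)
        else:
            if cur:
                entities.append(("".join(cur), cur_label))
            cur, cur_label = [], None
    if cur:
        entities.append(("".join(cur), cur_label))
    return entities
```

Exact-match F1, as used in the Results above, then counts a predicted span as correct only if both its boundaries and its label agree with the gold span.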


Face recognition plays a vital role in security applications. In recent years, researchers have focused on challenges such as pose and illumination variation in face recognition. Traditional methods focus on OpenCV's Fisherfaces, which analyze facial expressions and attributes. The deep learning method used in this proposed system is the Convolutional Neural Network (CNN). The proposed work includes the following modules: face detection, gender recognition, and age prediction. The results obtained from this work show that real-time age and gender detection using a CNN provides better accuracy than other existing approaches.
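The final step of such a pipeline, turning the CNN's two output heads into labels, can be sketched as below. The age buckets and the logits in the test are illustrative assumptions; the paper does not specify its output encoding.

```python
import numpy as np

# hypothetical age buckets for the age-prediction head
AGE_BUCKETS = ["0-12", "13-19", "20-35", "36-55", "56+"]

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_demographics(gender_logits, age_logits):
    """Map the CNN's gender head (2 logits, female first by assumption)
    and age head (one logit per bucket) to human-readable labels."""
    gender = "female" if softmax(gender_logits)[0] > 0.5 else "male"
    age = AGE_BUCKETS[int(np.argmax(age_logits))]
    return gender, age
```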


2019 ◽  
Vol 9 (22) ◽  
pp. 4749
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Jian Chen ◽  
...  

Decoding human brain activities, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data impose restrictions on satisfactory reconstruction, especially for deep learning reconstruction methods that require huge amounts of labelled samples. Unlike deep learning methods, humans can recognize a new image because the human visual system is naturally capable of extracting features from any object and comparing them. Inspired by this visual mechanism, we introduced the mechanism of comparison into a deep learning method to realize better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. In this way, we proposed the Siamese reconstruction network (SRN) method. Using the SRN, we achieved satisfying results on two fMRI recording datasets: 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from n samples to approximately 2n sample pairs, which takes full advantage of the limited quantity of training samples. The SRN learns to converge sample pairs of the same class and disperse sample pairs of different classes in feature space.
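The "converge same-class pairs, disperse different-class pairs" objective is the standard contrastive loss for Siamese networks, sketched here in NumPy with an illustrative margin of 1.0 (the paper's actual loss and margin are not given in the abstract).

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Contrastive loss on a feature pair: pull same-class pairs together,
    push different-class pairs at least `margin` apart in feature space."""
    d = np.linalg.norm(f1 - f2)
    if same:
        return d ** 2                       # same class: penalise distance
    return max(0.0, margin - d) ** 2        # different class: penalise closeness
```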


2021 ◽  
Author(s):  
Francesco Banterle ◽  
Rui Gong ◽  
Massimiliano Corsini ◽  
Fabio Ganovelli ◽  
Luc Van Gool ◽  
...  

Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach to reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as a modified U-Net model, were applied to the extracted input features. The models' performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and Dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively.
In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
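Pixel-level segmentation from per-pixel features can be sketched as nearest-centroid assignment in feature space. The two toy features here (raw intensity plus a 3×3 local mean) are a hedged stand-in for the paper's filter bank and VGG16 features, and the centroids would in practice come from K-means or labelled training pixels.

```python
import numpy as np

def pixel_features(img):
    """Stack per-pixel features: raw intensity plus a 3x3 local mean
    (a stand-in for a richer filter bank)."""
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    # sum of the 9 shifted copies == 3x3 neighbourhood sum at each pixel
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    return np.stack([img.astype(float), local_mean], axis=-1)

def segment(img, centroids):
    """Assign each pixel to the nearest phase centroid in feature space.

    `centroids` has shape (K, 2), one row per phase."""
    feats = pixel_features(img)                                    # (H, W, 2)
    d = np.linalg.norm(feats[..., None, :] - centroids, axis=-1)   # (H, W, K)
    return np.argmin(d, axis=-1)                                   # (H, W)
```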

