HUMAN IDENTIFICATION BASED ON FOOT IMAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORK

Author(s):  
Mustafa Shiwaish Hameed

The human footprint is a biometric trait: each person has a distinctive footprint. It can be used instead of password authentication in security systems, such as user authentication for financial transactions. A password-based system cannot verify that the person entering the password is the legitimate user, so a biometric system is more secure than a password-based one. In this seminar, a new identification system using a CNN was presented. In addition, a new foot image data set was created by collecting foot images from 150 people. The proposed method achieved 100% accuracy, improving on previous studies.
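The abstract does not describe the network's layers, but the forward pass of any CNN classifier of this kind follows the same pattern: convolution, nonlinearity, pooling, and a softmax over one output per subject. A minimal numpy sketch with hypothetical sizes (a 32x32 grayscale image, one 3x3 filter, 150 output classes matching the 150 subjects) and untrained random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling that halves each spatial dimension."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 32x32 "foot image" and random (untrained) weights; 150 subjects as in the paper.
image = rng.random((32, 32))
kernel = rng.standard_normal((3, 3))
W = rng.standard_normal((15 * 15, 150)) * 0.01

features = max_pool(np.maximum(conv2d(image, kernel), 0.0))  # conv -> ReLU -> pool
probs = softmax(features.flatten() @ W)                      # one score per subject
print(probs.shape)   # (150,)
```

The predicted identity would be `probs.argmax()`; training (not shown) fits the kernel and `W` to the collected data set.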

2021 ◽  
Vol 2 ◽  
Author(s):  
Chengjie Li ◽  
Lidong Zhu ◽  
Zhongqiang Luo ◽  
Zhen Zhang ◽  
Yilun Liu ◽  
...  

In space-based AIS (Automatic Identification System), the satellite's high orbit and wide coverage mean that many self-organizing communities fall within its observation range, so signals inevitably conflict, which reduces the probability of ship detection. To improve system processing power and security, this paper exploits the ability of neural networks to efficiently find optimal solutions and proposes a method that combines the blind source separation problem with a BP neural network: a suitably generated data set is used to train the network, thereby automatically producing a traditional blind signal separation algorithm with a more stable separation effect. Simulation results of combining blind source separation with the BP neural network show that the performance and stability of space-based AIS can be effectively improved.
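The core idea — train a network on generated mixtures whose true sources are known, so it learns to un-mix — can be sketched with numpy. The signals, mixing matrix, and single linear layer below are all hypothetical stand-ins for the paper's BP network; two synthetic "AIS-like" sources are mixed and a weight matrix is fit by gradient descent to recover them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic source signals and a fixed 2x2 mixing matrix (both hypothetical).
t = np.linspace(0, 1, 400)
S = np.vstack([np.sin(2 * np.pi * 13 * t),            # sinusoidal source
               np.sign(np.sin(2 * np.pi * 7 * t))])   # square-wave source
A = np.array([[0.8, 0.3], [0.4, 0.9]])
X = A @ S                                             # mixed observations

# A single linear layer W trained by gradient descent on the generated data set
# (known sources as targets), standing in for the BP network in the paper.
W = rng.standard_normal((2, 2)) * 0.1
lr = 0.05
for _ in range(2000):
    err = W @ X - S                       # prediction error against true sources
    W -= lr * (err @ X.T) / X.shape[1]    # mean-squared-error gradient step

S_hat = W @ X
corr = [abs(np.corrcoef(S[i], S_hat[i])[0, 1]) for i in range(2)]
print(min(corr) > 0.99)   # both sources recovered almost perfectly
```

Because the targets are known during training, the layer converges toward the inverse of the mixing matrix; at deployment only the mixed observations are needed.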


2020 ◽  
Vol 10 (11) ◽  
pp. 4010 ◽  
Author(s):  
Kwang-il Kim ◽  
Keon Myung Lee

Marine resources are valuable assets to be protected from illegal, unreported, and unregulated (IUU) fishing and overfishing. Detecting IUU fishing and overfishing requires identifying the fishing gears of the fishing ships in operation. This paper is concerned with automatically identifying fishing gears from AIS (automatic identification system)-based trajectory data of fishing ships. It proposes a deep learning-based fishing gear-type identification method in which six fishing gear type groups are identified from AIS-based ship movement data and environmental data. The proposed method preprocesses the trajectory data to handle different message intervals, missing messages, and contaminated messages. To capture the complicated dynamic patterns in the trajectories of each fishing gear type, a sliding window-based data slicing method is used to generate the training data set. The proposed method uses a CNN (convolutional neural network)-based deep neural network model consisting of a feature extraction module and a prediction module. The feature extraction module contains two CNN submodules followed by a fully connected network. The prediction module is a fully connected network that suggests a putative fishing gear type for the features extracted by the feature extraction module from the input trajectory data. The proposed CNN-based model has been trained and tested with a real trajectory data set of 1380 fishing ships collected over a year. A new performance index, DPI (total performance of the day-wise performance index), is proposed to compare the performance of gear-type identification techniques. For comparison, SVM (support vector machine)-based models were also developed. In the experiments, the trained CNN-based model achieved a DPI of 0.963, while the SVM models averaged a DPI of 0.814 for the 24-h window.
The high value of the DPI index indicates that the trained model is good at identifying the types of fishing gears.
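Sliding-window slicing of a ship track is simple to state precisely. The window length, stride, and feature count below are hypothetical (the paper does not give them); the sketch cuts one track of shape (T, F) into overlapping training samples:

```python
import numpy as np

def slice_trajectory(track, window, stride):
    """Cut one ship track of shape (T, F) into overlapping (window, F) samples."""
    starts = range(0, track.shape[0] - window + 1, stride)
    samples = [track[s:s + window] for s in starts]
    return np.stack(samples) if samples else np.empty((0, window, track.shape[1]))

# Hypothetical day of AIS fixes: 288 messages (5-min interval), 4 features
# (lat, lon, speed, course), sliced into overlapping windows.
track = np.random.default_rng(2).random((288, 4))
batch = slice_trajectory(track, window=48, stride=12)
print(batch.shape)   # (21, 48, 4)
```

Each slice becomes one training example for the CNN, which is how a single long voyage yields many samples and how dynamic patterns at different offsets are captured.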


2021 ◽  
Author(s):  
Masaki Ikuta

Many algorithms and methods have been proposed for Computed Tomography (CT) image reconstruction, particularly with the recent surge of interest in machine learning and deep learning methods. Most recently proposed methods, however, are limited to image-domain processing, where deep learning is used to learn the mapping from a noisy image data set to a true image data set. While deep learning-based methods can produce higher-quality images than conventional model-based post-processing algorithms, they have limitations: applied in the image domain, they cannot compensate for the information lost during the forward and backward projections of CT image reconstruction, especially in the presence of high noise. In this paper, we propose a new Recurrent Neural Network (RNN) architecture for CT image reconstruction. We propose the Gated Momentum Unit (GMU), which extends the Gated Recurrent Unit (GRU) but is specifically designed for image-processing inverse problems. This new RNN cell performs an iterative optimization with accelerated convergence. The GMU has a few gates that regulate information flow, deciding which important long-term information to keep and which insignificant short-term detail to discard. In addition, the GMU has a likelihood term and a prior term analogous to Iterative Reconstruction (IR): the likelihood term helps ensure that estimated images are consistent with the observation data, while the prior term keeps the likelihood term from overfitting each individual observation. We conducted a synthetic image study along with a real CT image study to demonstrate that the proposed method achieves the highest Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM), and we showed that the algorithm converges faster than other well-known methods.
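The GMU's exact update rules are not given in the abstract, but the gating mechanism it extends — the GRU — is standard. A numpy sketch of one GRU step, with hypothetical dimensions, shows how gates blend the kept long-term state with a new candidate; the GMU additionally carries momentum and likelihood/prior terms not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, P):
    """One GRU step: the update gate z decides how much old state to keep,
    the reset gate r decides how much old state feeds the new candidate."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h)            # update gate
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h)            # reset gate
    h_tilde = np.tanh(P["Wh"] @ x + P["Uh"] @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde                  # convex blend old/new

d_in, d_h = 8, 16   # hypothetical input and hidden sizes
P = {k: rng.standard_normal((d_h, d_in if k[0] == "W" else d_h)) * 0.1
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

h = np.zeros(d_h)
for _ in range(10):   # unrolled iterations, as in iterative reconstruction
    h = gru_step(rng.standard_normal(d_in), h, P)
print(h.shape)   # (16,)
```

In the paper's setting each unrolled step plays the role of one iteration of the reconstruction optimization, with the gates controlling what accumulated information survives between iterations.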


Universe ◽  
2021 ◽  
Vol 7 (7) ◽  
pp. 211
Author(s):  
Xingzhu Wang ◽  
Jiyu Wei ◽  
Yang Liu ◽  
Jinhao Li ◽  
Zhen Zhang ◽  
...  

Recently, astronomy has witnessed great advances in detectors and telescopes. Imaging data collected by these instruments are organized into very large datasets that form data-oriented astronomy. The imaging data contain many radio galaxies (RGs) that are of interest to astronomers. However, because astronomical databases in the information age are extremely large, a manual search for these galaxies is impractical. The ability to detect specific types of galaxies therefore depends largely on computer algorithms. Applying machine learning algorithms to large astronomical data sets can detect galaxies from photometric images more effectively, and astronomers are motivated to develop tools that automatically analyze massive imaging data, including automatic morphological detection of specified radio sources. Galaxy Zoo projects have generated great interest in visually classifying galaxy samples using CNNs, and Banfield studied radio morphologies and host galaxies derived from visual inspection in the Radio Galaxy Zoo project. However, galaxy classification has been studied considerably more than galaxy detection. We develop a galaxy detection model that locates and classifies Fanaroff–Riley class I (FR I) and Fanaroff–Riley class II (FR II) galaxies. The field of object detection has also developed rapidly since the convolutional neural network was proposed; You Only Look Once: Unified, Real-Time Object Detection (YOLO) is a neural-network-based detection model proposed by Redmon et al. We made several improvements to the detection of dense galaxies based on the original YOLOv5, mainly the following. (1) We use Varifocal loss, which weighs positive and negative samples asymmetrically and highlights the principal positive samples in the training phase. (2) Our model adds an attention mechanism to the convolution kernel so that the feature extraction network can dynamically adjust the size of its receptive field in deep convolutional neural networks; this gives the model good adaptability for identifying galaxies of different sizes in an image. (3) We use empirical practices suited to small-target detection, such as image segmentation and reducing the stride of the convolutional layers. Beyond these three contributions, this work also draws on different data sources, i.e., radio images and optical images, aiming at better classification performance and more accurate positioning. We used optical image data from SDSS, radio image data from FIRST, and label data from FR I and FR II catalogs to create a data set of FR Is and FR IIs. We then used this data set to train our improved YOLOv5 model and finally realized automatic classification and detection of FR Is and FR IIs. Experimental results show that our improved method achieves better performance: the mAP@0.5 of our model reaches 82.3%, and the locations (RA and Dec) of the galaxies are identified more accurately. Our model has great astronomical significance. For example, it can help astronomers find FR I and FR II galaxies to build a larger-scale galaxy catalog. Our detection method can also be extended to other types of RGs, so astronomers can locate a specific type of galaxy in considerably less time and with minimal human intervention, or combine it with other observational data (spectra and redshifts) to explore other properties of the galaxies.
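Varifocal loss, the paper's first modification, has a published closed form: positives are weighted by their target (IoU-aware) score q, while negatives are down-weighted by p**gamma so easy negatives contribute little. A numpy sketch with hypothetical scores (the alpha/gamma defaults follow the original Varifocal loss paper, not necessarily this one):

```python
import numpy as np

def varifocal_loss(p, q, alpha=0.75, gamma=2.0, eps=1e-9):
    """Varifocal loss: asymmetric treatment of positives (q > 0) and negatives.
    Positives use q-weighted binary cross-entropy; negatives get a focal factor."""
    p = np.clip(p, eps, 1 - eps)
    loss = np.where(
        q > 0,
        -q * (q * np.log(p) + (1 - q) * np.log(1 - p)),   # positive term, weighted by q
        -alpha * p ** gamma * np.log(1 - p))              # focal negative term
    return loss.mean()

p = np.array([0.9, 0.2, 0.1, 0.8])   # hypothetical predicted scores
q = np.array([1.0, 0.7, 0.0, 0.0])   # target scores; q = 0 marks a negative
loss = varifocal_loss(p, q)
print(round(loss, 4))
```

Note how the confident wrong negative (p = 0.8, q = 0) dominates the loss while the easy negative (p = 0.1) is almost free — the asymmetry the abstract describes.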


2020 ◽  
Vol 23 (6) ◽  
pp. 1155-1171
Author(s):  
Rodion Dmitrievich Gaskarov ◽  
Alexey Mikhailovich Biryukov ◽  
Alexey Fedorovich Nikonov ◽  
Daniil Vladislavovich Agniashvili ◽  
Danil Aydarovich Khayrislamov

Steel is one of the most important bulk materials these days. It is used almost everywhere, from medicine to industry. Detecting this material's defects is one of the most challenging problems for industries worldwide, and the process is manual and time-consuming. Through this study we tried to automate it. A convolutional neural network model, U-Net, was used for this task because it achieves accurate segmentation with a comparatively small training image set. The essence of this NN (neural network) is the step-by-step convolution of each image (encoding), followed by upsampling back to the initial resolution, yielding a mask of the image with its various classes. The foremost modification is resizing the input images to 128x800 px (the original images in the dataset are 256x1600 px) because of GPU memory limitations. Secondly, we used a ResNet34 CNN (convolutional neural network) as the encoder, pre-trained on the ImageNet1000 dataset with a modified output layer producing 4 outputs instead of 34. After running tests of this model, we obtained 92.7% accuracy on images of hot-rolled steel sheets.
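The resolution change the paper describes (256x1600 px down to 128x800 px) is exactly a factor-of-two reduction in each dimension. The paper does not say which resampling method was used; a minimal sketch with 2x2 average pooling as one plausible choice:

```python
import numpy as np

def downsample_2x(img):
    """Halve each dimension by 2x2 average pooling (e.g. 256x1600 -> 128x800)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Stand-in grayscale steel-sheet image at the dataset's original resolution.
sheet = np.random.default_rng(4).random((256, 1600))
small = downsample_2x(sheet)
print(small.shape)   # (128, 800)
```

Quartering the pixel count quarters the activation memory of every convolutional layer, which is what makes the model fit within the GPU memory limit.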


1992 ◽  
Vol 03 (02) ◽  
pp. 199-207
Author(s):  
Shahram Hejazi ◽  
Stephen M. Bauer ◽  
Robert A. Spangler

Thermal images of the human body, obtained at different wavelengths of infrared radiation, offer a means of eliminating several sources of error that can occur in single-wavelength thermal imaging procedures. Algebraic treatment of the image data, however, using nonlinear functions derived from integration over Planck's law of radiation distribution, proves extremely sensitive to experimental errors in measurement and to truncation errors in computation. The neural network backpropagation algorithm has been applied to this thermal data processing, resulting in a more stable and error-tolerant method of data reduction. Trained on a computed ideal data set, the algorithm has been shown to process actual data more reliably.
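The nonlinear functions the abstract refers to come from Planck's law, B(λ, T) = (2hc²/λ⁵) / (exp(hc/λkT) − 1). A short numpy sketch evaluating it at two infrared bands (the 4 µm and 10 µm band choices here are illustrative, not from the paper) shows why multi-wavelength data carries temperature information — the band ratio shifts with temperature — and hints at the sensitivity of inverting it:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) from Planck's law, in W * sr^-1 * m^-3."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / np.expm1(H * C / (wavelength_m * K * temp_k))

# Skin-range temperatures and two illustrative infrared bands.
T = np.array([303.0, 310.0])          # kelvin
b4 = planck_radiance(4e-6, T)         # mid-wave IR band
b10 = planck_radiance(10e-6, T)       # long-wave IR band
ratio = b4 / b10
print(ratio[1] > ratio[0])            # shorter band grows faster with T
```

Small measurement errors in either band propagate through this strongly nonlinear ratio, which is the instability the backpropagation network was trained to tolerate.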


2019 ◽  
Vol 5 (1) ◽  
pp. 231-234 ◽  
Author(s):  
Thomas Wittenberg ◽  
Pascal Zobel ◽  
Magnus Rathke ◽  
Steffen Mühldorfer

Early detection of polyps is one central goal of colonoscopic screening programs. To support gastroenterologists during this examination, deep convolutional neural networks can be applied for computer-assisted detection of neoplastic lesions. In this work, a Mask R-CNN architecture was applied. For training and testing, three independent colonoscopy data sets were used, including 2484 HD labelled images with polyps from our clinic, as well as two public image data sets from the MICCAI 2015 polyp detection challenge, consisting of 612 SD and 194 HD labelled images with polyps. After training the deep neural network, the best results for the three test data sets were recall = 0.92, precision = 0.86, F1 = 0.89 (data set A); rec = 0.86, prec = 0.80, F1 = 0.82 (data set B); and rec = 0.83, prec = 0.74, F1 = 0.79 (data set C).
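The three metrics reported above are all derived from raw detection counts. A minimal sketch, with hypothetical counts chosen only to reproduce data set A's operating point (the paper does not report the raw counts):

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts matching precision = 0.86, recall = 0.92 (data set A).
precision, recall, f1 = prf(tp=86, fp=14, fn=7)
print(round(precision, 2), round(recall, 2), round(f1, 2))   # 0.86 0.92 0.89
```

F1 is the harmonic mean of precision and recall, so it rewards models that balance missed polyps (fn) against false alarms (fp) rather than maximizing one at the other's expense.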


2011 ◽  
Vol 393-395 ◽  
pp. 205-208 ◽  
Author(s):  
Xue Mei Wang

Today, blind source separation (BSS) algorithms are widely used to separate the independent components of a data set based on its statistical properties. Especially in image data applications, the independent component analysis (ICA)-based BSS procedure has been successfully applied in image pre-processing to extract independent components and remove noise signals mixed into the image data. The contribution of this paper is the development of a nonlinear BSS method using a radial basis function (RBF) neural network-based ICA algorithm, built by adopting some modifications to the linear ICA model. Moreover, a genetic algorithm (GA) was used to optimize the RBF neural network to obtain a satisfactory nonlinear solution for the nonlinear mixing matrix. In the experiments of this work, the GA-optimized nonlinear ICA method and other ICA models were applied to image de-noising. A comparative analysis showed that the presented method obtains satisfactory and effective de-noising results.
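The RBF network at the heart of the method maps inputs through Gaussian basis functions before a linear read-out; the centers and width are the kind of parameters a GA would tune. A minimal numpy sketch with hypothetical sizes (10 centers, 2-dimensional inputs) showing only the forward pass, not the ICA objective or the GA itself:

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf_layer(x, centers, sigma):
    """Gaussian RBF activations: phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))."""
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical nonlinear un-mixer: RBF hidden layer + linear read-out.
centers = rng.random((10, 2))              # what a GA would optimize
W_out = rng.standard_normal((10, 2)) * 0.1
x = rng.random(2)                          # one mixed observation

phi = rbf_layer(x, centers, sigma=0.5)
y = phi @ W_out                            # estimated source pair
print(phi.shape, y.shape)   # (10,) (2,)
```

Because each activation depends nonlinearly on the distance to a center, the composite map can invert nonlinear mixing that a purely linear ICA model cannot.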


2020 ◽  
Author(s):  
Debanjan Konar ◽  
Siddhartha Bhattacharyya ◽  
Tapan Kumar Gandhi ◽  
Bijaya Ketan Panigrahi ◽  
Richard Jiang

This paper introduces a novel shallow self-supervised tensor neural network for volumetric segmentation of brain MR images, obviating training or supervision. The proposed network is a 3D version of the Quantum-Inspired Self-Supervised Neural Network (QIS-Net) architecture and is referred to as the 3D Quantum-inspired Self-supervised Tensor Neural Network (3D-QNet). The underlying architecture of 3D-QNet comprises a trinity of volumetric layers, viz. input, intermediate, and output layers, inter-connected using a 26-connected third-order neighborhood-based topology for voxel-wise processing of 3D MR image data suitable for semantic segmentation. Each volumetric layer contains quantum neurons designated by qubits (quantum bits). The incorporation of tensor decomposition in the quantum formalism leads to faster convergence of the network operations, precluding the inherently slow convergence of self-supervised networks. The segmented volumes are obtained once the network converges. The suggested 3D-QNet is tailored and tested extensively on the BRATS 2019 data set in the experiments carried out. 3D-QNet achieves promising dice similarity when compared with the intensively supervised convolutional network-based models 3D-UNet, Vox-ResNet, DRINet, and 3D-ESPNet, thus facilitating annotation-free semantic segmentation using a self-supervised shallow network.
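The 26-connected third-order neighborhood the architecture is wired on is a concrete, standard construct: every voxel connects to all voxels differing by at most 1 in each of the three indices. A short numpy sketch gathering that neighborhood for an interior voxel (the 5x5x5 volume is a stand-in, not BRATS data):

```python
import numpy as np
from itertools import product

def neighbors_26(volume, i, j, k):
    """Values of the 26-connected third-order neighborhood of voxel (i, j, k).
    Assumes (i, j, k) is an interior voxel so all offsets are in bounds."""
    offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    return np.array([volume[i + di, j + dj, k + dk] for di, dj, dk in offsets])

vol = np.random.default_rng(6).random((5, 5, 5))   # stand-in MR sub-volume
nb = neighbors_26(vol, 2, 2, 2)
print(nb.shape)   # (26,)
```

In 3D-QNet each inter-layer connection follows this pattern, so every neuron in the intermediate layer aggregates exactly these 26 neighbors plus the voxel itself.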


2020 ◽  
Vol 16 ◽  
pp. 227-232
Author(s):  
Rafał Sieczka ◽  
Maciej Pańczyk

Acquiring data for neural network training is an expensive and labour-intensive task, especially when such data is difficult to access. This article proposes the use of the 3D Blender graphics software as a tool to automatically generate synthetic image data, using price labels as an example. Using the fastai library, price label classifiers were trained on a set of synthetic data and compared with classifiers trained on a real data set. The comparison of the results showed that it is possible to use Blender to generate synthetic data. This allows for a significant acceleration of the data acquisition process and, consequently, of the learning process of neural networks.
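The principle — programmatically render labelled images instead of photographing and annotating them — can be illustrated without Blender itself. A toy numpy generator (entirely hypothetical: a bright rectangle on a noise background standing in for a rendered price label) that emits images together with ground-truth boxes, the pairing a renderer provides for free:

```python
import numpy as np

rng = np.random.default_rng(7)

def synthetic_label_image(size=64):
    """Noise background with one bright rectangle standing in for a price label.
    Returns the image and its ground-truth box (x, y, w, h) at no labelling cost."""
    img = rng.random((size, size)) * 0.3              # dark noise background
    w, h = rng.integers(10, 25, size=2)               # random label extent
    x, y = rng.integers(0, size - w), rng.integers(0, size - h)
    img[y:y + h, x:x + w] = 0.9                       # the "label" region
    return img, (int(x), int(y), int(w), int(h))

# A small synthetic training set, generated rather than photographed.
images, boxes = zip(*(synthetic_label_image() for _ in range(32)))
data = np.stack(images)
print(data.shape, len(boxes))   # (32, 64, 64) 32
```

With Blender the same loop would vary lighting, camera pose, and label artwork per frame, but the payoff is identical: unlimited labelled samples at the cost of compute rather than manual annotation.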

