Criminal face identification system using deep learning algorithm multi-task cascaded convolutional neural network (MTCNN)

Author(s):  
K. Kranthi Kumar ◽  
Y. Kasiviswanadham ◽  
D.V.S.N.V. Indira ◽  
Pushpa Priyanka palesetti ◽  
Ch.V. Bhargavi
Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 652 ◽  
Author(s):  
Carlo Augusto Mallio ◽  
Andrea Napolitano ◽  
Gennaro Castiello ◽  
Francesco Maria Giordano ◽  
Pasquale D'Alessio ◽  
...  

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic curve (ROC curve). Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
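The evaluation pipeline described above (Mann-Whitney U test plus ROC/AUC on the algorithm's output scores) can be sketched as follows. This is not the authors' code: the score arrays are illustrative placeholders, and only the group sizes (n = 34 COVID-19, n = 21 ICI pneumonitis) come from the study.

```python
# Minimal sketch of evaluating a binary classifier's scores with the
# Mann-Whitney U test and ROC/AUC, as in the study. Scores are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
covid_scores = rng.normal(0.7, 0.15, 34)  # placeholder AI scores, COVID-19 group
ici_scores = rng.normal(0.6, 0.15, 21)    # placeholder scores, ICI pneumonitis group

# Mann-Whitney U test on the two score distributions (p < 0.05 threshold)
stat, p_value = mannwhitneyu(covid_scores, ici_scores)

# ROC analysis with COVID-19 as the positive class
y_true = np.concatenate([np.ones(34), np.zeros(21)])
y_score = np.concatenate([covid_scores, ici_scores])
auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"p = {p_value:.3f}, AUC = {auc:.2f}")
```

A low AUC on such a comparison, as reported in the abstract, indicates the score distributions of the two groups overlap heavily.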


Author(s):  
Wenjing She

In this research, Dunhuang murals are taken as the objects of restoration, and the role of digital repair combined with a deep learning algorithm in mural restoration is explored. First, image restoration technology is described and its advantages and disadvantages are analyzed. Second, the deep learning algorithm based on the artificial neural network is described and analyzed. Finally, the deep learning algorithm is integrated into digital repair technology, and a mural restoration method based on the generalized regression neural network (GRNN) is proposed. The morphological expansion method and the anisotropic diffusion method are used to preprocess the images, and MATLAB is used for simulation analysis and evaluation of the restoration effect. The results show that in the restoration of the original image, the accuracy of conventional digital image restoration technology is not high, and non-texture restoration techniques are not applicable to large-scale texture areas. The predicted value of the mural restoration effect based on the generalized regression neural network is closer to the true value, and the anisotropic diffusion method has a significant effect on suppressing image noise. For the image similarity rate, different numbers of training samples and smoothing parameters are compared and analyzed. It is found that when the value of the smoothing parameter δ is small, the number of training samples should be increased to improve the accuracy of the predicted value; if the number of training samples is small, a larger value of δ is needed to obtain a better prediction, under which the best restoration effect is obtained. The study shows that the proposed model works well for the restoration of Dunhuang murals and provides an experimental reference for the restoration of other murals.
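The anisotropic diffusion preprocessing step mentioned above can be sketched with the classic Perona-Malik formulation. The paper's exact parameters are not given, so the conduction constant `kappa`, step size `gamma`, and iteration count below are assumptions; the input image is a synthetic placeholder.

```python
# Illustrative sketch of anisotropic (Perona-Malik) diffusion: smooths
# noise while the edge-stopping coefficients preserve strong edges.
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.2):
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        dn = np.roll(img, -1, axis=0) - img   # north
        ds = np.roll(img, 1, axis=0) - img    # south
        de = np.roll(img, -1, axis=1) - img   # east
        dw = np.roll(img, 1, axis=1) - img    # west
        # Edge-stopping conduction coefficients: small where gradients are large
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img

noisy = np.random.default_rng(1).normal(128, 25, (64, 64))  # placeholder image
smoothed = anisotropic_diffusion(noisy)
```

Unlike an isotropic Gaussian blur, the conduction term suppresses diffusion across strong gradients, which is why the method is attractive for denoising mural images without washing out contours.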


2018 ◽  
Vol 7 (3.34) ◽  
pp. 237
Author(s):  
R Aswini Priyanka ◽  
C Ashwitha ◽  
R Arun Chakravarthi ◽  
R Prakash

In the scientific world, face recognition has become an important research topic. A face identification system is an application capable of verifying a human face from live video or digital images. One of the best methods is to compare the particular facial attributes of a person with the images in a database. It is widely used in biometrics and security systems. Face identification used to be a challenging problem because of variations in viewpoint and facial expression, but since deep neural networks entered the technology stack it has become much easier to detect and recognize faces, and efficiency has increased dramatically. In this paper, the ORL database, which contains ten images each of forty people, is used to evaluate our methodology. We use a Back Propagation Neural Network (BPNN) in a deep learning model to recognize faces and increase the efficiency of the model compared to previously existing face recognition models.
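A minimal backpropagation network of the kind the abstract refers to can be sketched in NumPy. The paper does not specify its architecture or hyperparameters, so the layer sizes, learning rate, and toy data below are assumptions (real use would feed flattened ORL face images and their subject labels).

```python
# Minimal BPNN sketch: one hidden layer, sigmoid activations, trained by
# backpropagation on mean squared error. Data here is a synthetic stand-in.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 64))                 # 40 samples, 64 features each
y = rng.integers(0, 4, size=40)               # 4 toy classes
Y = np.eye(4)[y]                              # one-hot targets

W1 = rng.normal(scale=0.1, size=(64, 32))     # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(32, 4))      # hidden -> output weights
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(300):
    h = sigmoid(X @ W1)                       # forward pass: hidden layer
    out = sigmoid(h @ W2)                     # forward pass: output layer
    losses.append(np.mean((out - Y) ** 2))    # mean squared error
    # Backpropagation: chain rule through both sigmoid layers
    d_out = (out - Y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_hid
```

Bias terms and minibatching are omitted for brevity; the point is the backward pass, where each layer's error signal is the next layer's error propagated through the weights and the sigmoid derivative.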


Author(s):  
Yina Wu ◽  
Mohamed Abdel-Aty ◽  
Ou Zheng ◽  
Qing Cai ◽  
Shile Zhang

This paper presents an automated traffic safety diagnostics solution named “Automated Roadway Conflict Identification System” (ARCIS) that uses deep learning techniques to process traffic videos collected by unmanned aerial vehicle (UAV). Mask region convolutional neural network (R-CNN) is employed to improve detection of vehicles in UAV videos. The detected vehicles are tracked by a channel and spatial reliability tracking algorithm, and vehicle trajectories are generated based on the tracking algorithm. Missing vehicles can be identified and tracked by identifying stationary vehicles and comparing the intersection over union (IoU) between the detection results and the tracking results. Rotated bounding rectangles based on the pixel-wise masks generated by Mask R-CNN detection are introduced to obtain precise vehicle size and location data. Based on the vehicle trajectories, post-encroachment time (PET) is calculated for each conflict event at the pixel level. By comparing the PET values with a threshold, conflicts can be reported together with the pixels in which they happened. Various conflict types (rear-end, head-on, sideswipe, and angle) can also be determined. A case study at a typical signalized intersection is presented; the results indicate that the proposed framework could significantly improve the accuracy of the output data. Moreover, safety diagnostics for the studied intersection are conducted by calculating the PET values for each conflict event. It is expected that the proposed detection and tracking method with UAVs could help diagnose road safety problems efficiently, so that appropriate countermeasures could then be proposed.
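Two quantities central to the pipeline above, IoU between a tracker box and a detection box, and PET at a conflict pixel, can be sketched as follows. This is not the authors' implementation; the (x1, y1, x2, y2) box format and the timestamps are illustrative assumptions.

```python
# Sketch of IoU between axis-aligned boxes and PET for a shared pixel.
def iou(a, b):
    """IoU of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def pet(t_leave_first, t_arrive_second):
    """PET: gap between the first road user leaving a pixel and the
    second arriving at it; a small PET flags a conflict."""
    return t_arrive_second - t_leave_first

box_track = (10, 10, 50, 50)   # box from the tracker
box_det = (12, 12, 52, 52)     # box from the detector
overlap = iou(box_track, box_det)
gap = pet(t_leave_first=12.4, t_arrive_second=13.1)  # seconds
```

In the paper's pipeline a high IoU between a tracked box and a detection is used to re-associate missed vehicles, and PET values below a chosen threshold mark conflict events.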


2020 ◽  
Vol 17 (8) ◽  
pp. 3328-3332
Author(s):  
S. Gowri ◽  
U. Srija ◽  
P. A. Shirley Divya ◽  
J. Jabez ◽  
J. S. Vimali

Classifying and predicting mangrove species is one of the most important applications in our ecosystem. Mangroves are endangered species that play a major role in the ecosystem: they help prevent calamities such as soil erosion, tsunamis, storms, and wind turbulence. Mangroves have to be afforested and conserved in order to maintain a healthy ecosystem, and to attain this, the study of mangroves must come first. To classify mangroves in their habitat, we use a deep neural network algorithm.


2021 ◽  
Vol 8 (3) ◽  
pp. 619
Author(s):  
Candra Dewi ◽  
Andri Santoso ◽  
Indriati Indriati ◽  
Nadia Artha Dewi ◽  
Yoke Kusuma Arbawa

The increasing number of people with diabetes is one of the factors causing the high number of people with diabetic retinopathy. One of the images used by ophthalmologists to identify diabetic retinopathy is a retinal photo. In this research, identification of diabetic retinopathy is done automatically using retinal fundus images and the Convolutional Neural Network (CNN) algorithm, which is a variation of the deep learning algorithm. An obstacle found in the recognition process is that the color of the retina tends to be yellowish red, so the RGB color space does not produce optimal accuracy. Therefore, in this research, various color spaces were tested to get better results. Trials using 1000 images in the RGB, HSI, YUV, and L*a*b* color spaces gave suboptimal results on balanced data, where the best accuracy was still below 50%. However, on unbalanced data the accuracy was fairly high: 83.53% on training data in the YUV color space and 74.40% on testing data in all color spaces.
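The color-space step described above, converting an RGB fundus image before feeding it to the CNN, can be sketched for the YUV case. The BT.601 conversion matrix is standard; the image below is a random placeholder, and the paper may have used a different YUV variant.

```python
# Sketch of RGB -> YUV conversion (BT.601 weights) for an HxWx3 image
# with channel values in [0, 1].
import numpy as np

def rgb_to_yuv(rgb):
    m = np.array([[0.299, 0.587, 0.114],        # Y: luma
                  [-0.14713, -0.28886, 0.436],  # U: blue-difference chroma
                  [0.615, -0.51499, -0.10001]]) # V: red-difference chroma
    return rgb @ m.T

img = np.random.default_rng(2).random((8, 8, 3))  # placeholder fundus patch
yuv = rgb_to_yuv(img)
```

Separating luma from chroma in this way is one plausible reason YUV helped on reddish-yellow retinal images: intensity structure and color cast end up in different channels.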


2020 ◽  
Vol 24 (5) ◽  
pp. 1065-1086
Author(s):  
Kudakwashe Zvarevashe ◽  
Oludayo O. Olugbara

Speech emotion recognition has become the heart of most human-computer interaction applications in the modern world. The growing need to develop emotionally intelligent devices has opened up many research opportunities. Most researchers in this field have applied handcrafted features and machine learning techniques to recognizing speech emotion. However, these techniques require extra processing steps, and handcrafted features are usually not robust; they are also computationally intensive, and the curse of dimensionality results in low discriminating power. Research has shown that deep learning algorithms are effective for extracting robust and salient features from datasets. In this study, we have developed a custom 2D convolutional neural network that performs both feature extraction and classification of vocal utterances. The network has been evaluated against a deep multilayer perceptron network and a deep radial basis function network using the Berlin database of emotional speech, the Ryerson audio-visual emotional speech database, and the Surrey audio-visual expressed emotion corpus. The described deep learning algorithm achieves the highest precision, recall, and F1-scores when compared to other existing algorithms. It is observed that there may be a need to develop customized solutions for different language settings depending on the area of application.
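The core operation of such a 2D-CNN, a convolution pass over a time-frequency representation of an utterance, can be sketched in plain NumPy. This is not the authors' network: the spectrogram is a random placeholder and the averaging kernel is an arbitrary illustration of a learned filter.

```python
# Minimal sketch of one conv + ReLU stage of a 2D-CNN over a spectrogram.
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation of array x with kernel k."""
    kh, kw = k.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

spectrogram = np.random.default_rng(3).random((32, 40))  # time x frequency bins
kernel = np.ones((3, 3)) / 9.0                           # stand-in for a learned filter
feature_map = np.maximum(conv2d(spectrogram, kernel), 0) # ReLU activation
```

In a full network many such kernels are learned end-to-end, which is what lets the model replace handcrafted acoustic features with features extracted directly from the data.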

