Criminal Detection: Study of Activation Functions and Optimizers

The face is the primary means of recognizing a person, transmitting information, communicating with others, and inferring people's feelings, among other things. Our faces reveal more than we think: a facial image may show personal characteristics such as ethnicity, gender, age, fitness, emotion, psychology, and occupation. Together with the recent specialization of deep learning models, the exponential growth in the computing power and memory of machines has greatly increased the role of images in recognizing semantic patterns. Facial photographs can reveal such personal traits in the same way that a textual post on social media reveals its author's individual characteristics. We investigate a new level of image understanding by using deep learning to infer criminal proclivity from facial images. A convolutional neural network (CNN) deep learning model is used to differentiate between criminal and non-criminal facial images. Using tenfold cross-validation on a set of 5,500 face pictures, the model's confusion matrix and training and test accuracies are recorded. The CNN was more reliable than a standard feedforward neural network (SNN) in learning to reach its best test accuracy, which was 8% higher than the SNN's test accuracy. Dissecting and visualizing the convolutional layers showed that the CNN distinguished the two sets of images based on the shape of the face, eyebrows, top of the eye, pupils, nostrils, and lips. In this project we focus on activation functions and optimizers. Activation functions are of two types, saturated and non-saturated; here we use non-saturating activation functions such as ReLU and SELU together with a softmax output layer. Combining ReLU and softmax gives a test accuracy of 99.3%, while combining SELU and softmax gives 99.6%; the SELU and softmax combination therefore gives the better accuracy.
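A minimal sketch of the kind of CNN described above, written in Keras-TensorFlow, is shown below. The input size, filter counts, layer sizes, and the Adam optimizer are illustrative assumptions rather than the exact architecture used in this work; the point is only how the non-saturating hidden activation (ReLU or SELU) is paired with a softmax output layer.

# Sketch of a CNN with a configurable non-saturating hidden activation and a
# softmax output layer. Input shape, filter counts, and optimizer are
# illustrative assumptions, not the authors' exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(64, 64, 1), num_classes=2, hidden_activation="selu"):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation=hidden_activation),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation=hidden_activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=hidden_activation),
        layers.Dense(num_classes, activation="softmax"),  # softmax output
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Switching hidden_activation between "relu" and "selu" reproduces the two
# configurations compared above (ReLU + softmax vs. SELU + softmax).
model = build_cnn(hidden_activation="selu")
model.summary()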

2020, Vol 7 (1). Author(s): Mahdi Hashemi, Margeret Hall

Explosive performance and memory space growth in computing machines, along with recent specialization of deep learning models have radically boosted the role of images in semantic pattern recognition. In the same way that a textual post on social media reveals individual characteristics of its author, facial images may manifest some personality traits. This work is the first milestone in our attempt to infer personality traits from facial images. With this ultimate goal in mind, here we explore a new level of image understanding, inferring criminal tendency from facial images via deep learning. In particular, two deep learning models, including a standard feedforward neural network (SNN) and a convolutional neural network (CNN) are applied to discriminate criminal and non-criminal facial images. Confusion matrix and training and test accuracies are reported for both models, using tenfold cross-validation on a set of 10,000 facial images. The CNN was more consistent than the SNN in learning to reach its best test accuracy, which was 8% higher than the SNN's test accuracy. Next, to explore the classifier's hypothetical bias due to gender, we controlled for gender by applying only male facial images. No meaningful discrepancies in classification accuracies or learning consistencies were observed, suggesting little to no gender bias in the classifier. Finally, dissecting and visualizing convolutional layers in CNN showed that the shape of the face, eyebrows, top of the eye, pupils, nostrils, and lips are taken advantage of by CNN in order to classify the two sets of images.
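Both abstracts report results under tenfold cross-validation with an aggregated confusion matrix. Below is a hedged sketch of that evaluation protocol; the data arrays, the build_model factory, and its scikit-learn-style fit/predict interface are placeholders standing in for whichever classifier (SNN or CNN) is evaluated.

# Tenfold cross-validation with an accumulated confusion matrix. X holds the
# images (e.g. flattened pixels) and y the binary labels; build_model is a
# placeholder factory returning a classifier with fit/predict methods.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, accuracy_score

def tenfold_evaluate(X, y, build_model):
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    total_cm = np.zeros((2, 2), dtype=int)
    accuracies = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()
        model.fit(X[train_idx], y[train_idx])
        y_pred = model.predict(X[test_idx])
        total_cm += confusion_matrix(y[test_idx], y_pred)
        accuracies.append(accuracy_score(y[test_idx], y_pred))
    return total_cm, float(np.mean(accuracies))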


2018, Vol 6 (3), pp. 122-126. Author(s): Mohammed Ibrahim Khan, Akansha Singh, Anand Handa, ...

2021, Vol 11 (9), pp. 3863. Author(s): Ali Emre Öztürk, Ergun Erçelebi

A large amount of training image data is required for solving image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine the effect on performance when these networks solve real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two different experimental setups, and emphasizes the significant improvement in the performance of deep learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training the commonly used deep learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer and the commonly used deep learning-based networks with synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value for image classification test accuracy was measured as 88.9%. A total of 432 different training combinations were investigated in the experimental setups. Various DL networks were trained with four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for the networks in experiment 1 ranged from 55% to 74%, whereas those for the experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer therefore has a considerable effect on the image classification accuracy of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
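The sweep over optimizers, batch sizes, learning rates, and dropout described above amounts to an exhaustive grid search. The sketch below illustrates that pattern; the candidate values and the run_training placeholder are assumptions for illustration and do not reproduce the paper's exact 432-configuration grid.

# Exhaustive grid over optimizer, batch size, learning rate, and dropout.
# The values and the run_training placeholder are illustrative only.
import itertools

optimizers = ["adam", "sgd", "rmsprop", "adagrad"]  # four optimizers
batch_sizes = [16, 32, 64]
learning_rates = [1e-2, 1e-3, 1e-4]
dropouts = [0.0, 0.25, 0.5]

def run_training(optimizer, batch_size, lr, dropout):
    # Placeholder: train a network (with or without the CDNTS layer) using
    # these hyperparameters and return its test-set AUC.
    return 0.0

results = {}
for opt, bs, lr, dr in itertools.product(optimizers, batch_sizes,
                                         learning_rates, dropouts):
    results[(opt, bs, lr, dr)] = run_training(opt, bs, lr, dr)

best = max(results, key=results.get)
print("best configuration:", best, "AUC:", results[best])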


2021, Vol 11 (6), pp. 2723. Author(s): Fatih Uysal, Fırat Hardalaç, Ozan Peker, Tolga Tolunay, Nil Tokgöz

Fractures occur in the shoulder area, which has a wider range of motion than other joints in the body, for various reasons. To diagnose these fractures, data gathered from X-radiation (X-ray), magnetic resonance imaging (MRI), or computed tomography (CT) are used. This study aims to help physicians by using artificial intelligence to classify shoulder images taken with X-ray devices as fracture or non-fracture. For this purpose, the performance of 26 deep learning-based pre-trained models in the detection of shoulder fractures was evaluated on the musculoskeletal radiographs (MURA) dataset, and two ensemble learning models (EL1 and EL2) were developed. The pre-trained models used are ResNet, ResNeXt, DenseNet, VGG, Inception, MobileNet, and their spinal fully connected (Spinal FC) versions. For the EL1 and EL2 models, which were developed from the best-performing pre-trained models, the test accuracies were 0.8455 and 0.8472, Cohen's kappa values were 0.6907 and 0.6942, and the areas under the receiver operating characteristic (ROC) curve (AUC) for the fracture class were 0.8862 and 0.8695, respectively. Across the 28 classifications in total, the highest test accuracy and Cohen's kappa values were obtained with the EL2 model, and the highest AUC value with the EL1 model.
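How EL1 and EL2 combine their member models is not specified here, but a common ensembling choice is to average the class probabilities of the members and then score the result. The sketch below follows that assumption; the member models and the data are placeholders, and the metrics mirror those reported above (test accuracy, Cohen's kappa, AUC).

# Average the fracture-class probabilities of several member models and
# compute accuracy, Cohen's kappa, and AUC. Probability averaging is an
# assumed combination rule, not necessarily the one used in EL1/EL2.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

def evaluate_ensemble(member_probs, y_true):
    # member_probs: list of (n_samples,) arrays, each a model's predicted
    # probability of the fracture class; y_true: 0 = non-fracture, 1 = fracture.
    avg_prob = np.mean(np.stack(member_probs), axis=0)
    y_pred = (avg_prob >= 0.5).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "cohen_kappa": cohen_kappa_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, avg_prob),
    }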


2019, Vol 73 (5), pp. 565-573. Author(s): Yun Zhao, Mahamed Lamine Guindo, Xing Xu, Miao Sun, Jiyu Peng, ...

In this study, a method based on laser-induced breakdown spectroscopy (LIBS) was developed to detect soil contaminated with Pb. Different levels of Pb were added to soil samples in which tobacco was planted over a period of two to four weeks. Principal component analysis and deep learning with a deep belief network (DBN) were implemented to classify the LIBS data. The robustness of the method was verified through a comparison with the results of a support vector machine and partial least squares discriminant analysis. A confusion matrix of the different algorithms shows that the DBN achieved satisfactory classification performance on all samples of contaminated soil. In terms of classification, the proposed method performed better on samples contaminated for four weeks than on those contaminated for two weeks. The results show that LIBS can be used with deep learning for the detection of heavy metals in soil.
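As a point of reference for the comparison mentioned above, a conventional pipeline for such LIBS data is dimensionality reduction followed by a classical classifier and a confusion matrix. The sketch below shows a PCA plus support vector machine baseline; the spectra X, labels y, component count, and SVM settings are assumptions, and the DBN itself is not reproduced here.

# PCA + SVM baseline for classifying LIBS spectra, evaluated with a
# confusion matrix. X (spectra) and y (contamination labels) are assumed to
# be loaded elsewhere; the settings below are illustrative.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def classify_libs(X, y, n_components=20):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=n_components),
                        SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    return confusion_matrix(y_test, clf.predict(X_test))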


Foods, 2021, Vol 10 (7), pp. 1633. Author(s): Chreston Miller, Leah Hamilton, Jacob Lahne

This paper is concerned with extracting relevant terms from a text corpus on whisk(e)y. “Relevant” terms are usually contextually defined in their domain of use. Arguably, every domain has a specialized vocabulary used for describing things. For example, the field of Sensory Science, a sub-field of Food Science, investigates human responses to food products and differentiates “descriptive” terms for flavors from “ordinary”, non-descriptive language. Within the field, descriptors are generated through Descriptive Analysis, a method wherein a human panel of experts tastes multiple food products and defines descriptors. This process is both time-consuming and expensive. However, one could leverage existing data to identify and build a flavor language automatically. For example, there are thousands of professional and semi-professional reviews of whisk(e)y published on the internet, providing abundant descriptors interspersed with non-descriptive language. The aim, then, is to be able to automatically identify descriptive terms in unstructured reviews for later use in product flavor characterization. We created two systems to perform this task. The first is an interactive visual tool that can be used to tag examples of descriptive terms from thousands of whisky reviews. This creates a training dataset that we use to perform transfer learning using GloVe word embeddings and a Long Short-Term Memory deep learning model architecture. The result is a model that can accurately identify descriptors within a corpus of whisky review texts with a train/test accuracy of 99% and precision, recall, and F1-scores of 0.99. We tested for overfitting by comparing the training and validation loss for divergence. Our results show that the language structure for descriptive terms can be programmatically learned.
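A hedged sketch of the transfer-learning setup described above is given below: an embedding layer initialised from pre-trained GloVe vectors feeding an LSTM that tags each token of a review as descriptive or not. The vocabulary size, sequence length, layer widths, and the bidirectional arrangement are assumptions rather than the authors' exact architecture.

# Token-level descriptor tagger: frozen GloVe embeddings + LSTM. The GloVe
# matrix would be built from the review vocabulary; a random stand-in keeps
# the sketch runnable. Sizes and layers are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, initializers

def build_tagger(embedding_matrix, max_len=100):
    vocab_size, embed_dim = embedding_matrix.shape
    model = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(vocab_size, embed_dim,
                         embeddings_initializer=initializers.Constant(embedding_matrix),
                         trainable=False),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_tagger(np.random.rand(5000, 100).astype("float32"))
model.summary()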


2013, Vol 321-324, pp. 2106-2109. Author(s): Fei Yan Ren

One of the most important factors in achieving organizational targets is the effectiveness of financial management structures, and the users of those structures play an especially important role in that effectiveness. The purpose of this research is to study the influence of human factors, including the personal and individual characteristics of users of PC-based financial management structures, on their effectiveness. For this purpose, a sample of 2,354 offices, organizations, and private companies that apply PC-based financial management structures was selected randomly, and the data were collected using questionnaires. To identify the personal characteristics of users, questionnaires designed according to a four-factor model of personality were administered. To investigate the relationship between the effectiveness of the structures and personality, four hypotheses were formulated, one for each personality factor. In addition, hypotheses were formulated and tested on the relationship between users' expertise (educational level, educational field, and amount of PC training), job satisfaction, and experience on the one hand, and the effectiveness of PC-based accounting management structures on the other. The results indicate that personal characteristics, including agreeableness, openness, conscientiousness, and work experience, affect the effectiveness of PC-based financial management structures.


2021, Vol 5 (6), pp. 1036-1043. Author(s): Ardi Wijaya, Puji Rahayu, Rozali Toyib

The problem of selecting the best smile through image processing is strongly influenced by image quality, background, position, and lighting, so an analysis using existing image-processing algorithms is needed to build a system that can select the best smile. For this purpose the Shi-Tomasi algorithm, which is commonly used to detect the corners of the smile region in facial images, was applied. The Shi-Tomasi corner computation processes a target image effectively in an edge-detection ballistic test; the corner points are then checked against the estimated translational parameters in a re-creation test on the translational component to identify the cause of image degradation, and edge points are located so that objects can be identified while noise in the image is removed. The algorithm was tested on 20 samples of human facial images, each with 5 different smile images, giving 100 smile images of test data in total; in detecting a good smile, the Shi-Tomasi algorithm reached an accuracy of 95%, evaluated using the confusion matrix, precision, recall, and accuracy methods.
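OpenCV exposes the Shi-Tomasi corner measure through goodFeaturesToTrack, so corner detection on a facial image can be sketched as below. The file name, maximum corner count, and quality/distance thresholds are illustrative assumptions; restricting the search to a smile region of interest would be an additional step.

# Shi-Tomasi corner detection on a facial image using OpenCV's
# goodFeaturesToTrack. Thresholds and the corner count are illustrative.
import cv2
import numpy as np

def detect_corners(image_path, max_corners=50):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10)
    if corners is not None:
        for x, y in np.intp(corners).reshape(-1, 2):
            cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)  # mark corner
    return img, corners

# Example: annotated, corners = detect_corners("smile_sample.jpg")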


2021, pp. 20200172. Author(s): Münevver Coruh Kılıc, Ibrahim Sevki Bayrakdar, Özer Çelik, Elif Bilgir, Kaan Orhan, ...

Objective: This study evaluated the use of a deep-learning approach for automated detection and numbering of deciduous teeth in children as depicted on panoramic radiographs. Methods and materials: An artificial intelligence (AI) algorithm (CranioCatch, Eskisehir-Turkey) using Faster R-CNN Inception v2 (COCO) models was developed to automatically detect and number deciduous teeth as seen on pediatric panoramic radiographs. The algorithm was trained and tested on a total of 421 panoramic images. System performance was assessed using a confusion matrix. Results: The AI system was successful in detecting and numbering the deciduous teeth of children as depicted on panoramic radiographs. The sensitivity and precision rates were high: the estimated sensitivity, precision, and F1 score were 0.9804, 0.9571, and 0.9686, respectively. Conclusion: Deep-learning-based AI models are a promising tool for the automated charting of panoramic dental radiographs of children. In addition to serving as a time-saving measure and an aid to clinicians, AI plays a valuable role in forensic identification.
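The F1 score reported above is the harmonic mean of precision and sensitivity (recall), so it can be recomputed directly from the two stated values as a quick consistency check.

# F1 as the harmonic mean of precision and recall; recomputing it from the
# reported sensitivity and precision reproduces the reported F1 score.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9571, 0.9804), 4))  # 0.9686, matching the abstract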


2021, pp. 1-11. Author(s): Oscar Herrera, Belém Priego

Traditionally, only a few activation functions have been considered in neural networks, including bounded functions such as the threshold, sigmoidal, and hyperbolic-tangent functions, as well as unbounded ones such as ReLU, GELU, and Softplus used in deep learning, but the search for new activation functions is still an open research area. In this paper, wavelets are reconsidered as activation functions in neural networks, and the performance of Gaussian-family wavelets (first, second, and third derivatives) is studied together with other functions available in Keras-TensorFlow. Experimental results show how combining these activation functions can improve performance and support the idea of extending the list of activation functions to wavelets, which can be made available on high-performance platforms.
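In Keras-TensorFlow, a wavelet can be used as an activation simply by passing a callable to a layer. The sketch below uses the Mexican-hat form, which is proportional to the second derivative of a Gaussian; the surrounding network, its sizes, and the mixing with ReLU are illustrative assumptions.

# A Gaussian-derivative (Mexican-hat) wavelet used as a custom activation in
# a small Keras model. The architecture around it is illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

def mexican_hat(x):
    # Proportional to the second derivative of a Gaussian:
    # psi(x) = (1 - x^2) * exp(-x^2 / 2)
    return (1.0 - tf.square(x)) * tf.exp(-tf.square(x) / 2.0)

model = models.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(32, activation=mexican_hat),   # wavelet activation
    layers.Dense(32, activation="relu"),        # combined with a standard one
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()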

