Classification of Sad Emotions and Depression Through Images Using Convolutional Neural Network (CNN)

2021, Vol 6 (1), pp. 90
Author(s): Muhammad Fathur Prayuda

The human face serves many functions, especially in expressing emotion. Each expression has a distinctive form, so the feeling a person is experiencing can be recognized from it; the appearance of a feeling is usually driven by emotion. Research on the classification of emotions has been carried out with various methods. In this study, a Convolutional Neural Network (CNN) was used as a classifier for sad and depressive emotions. The CNN method has the advantage of convolutional preprocessing, which allows it to extract hidden features from an image. The dataset used for classification came from the FER2013 facial expression image dataset, split 60% for training and 40% for validation; the trained model reached a total loss of 60% and a test accuracy of 68%.
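As a rough illustration of this kind of pipeline, the sketch below builds a small Keras CNN for 48x48 grayscale FER2013-style images and trains it with a 60/40 train/validation split. The layer sizes, epoch count, and the `x_train`/`y_train` arrays are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch, not the paper's exact architecture: a small CNN for
# 48x48 grayscale FER2013-style images. Layer sizes, epochs, and the
# x_train / y_train arrays are illustrative assumptions.
from tensorflow.keras import layers, models

IMG_SHAPE = (48, 48, 1)   # FER2013 faces are 48x48 grayscale
NUM_CLASSES = 2           # sad vs. depressive in this study

def build_cnn():
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SHAPE),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# The 60/40 split can be expressed directly with validation_split:
# x_train: float32 array (N, 48, 48, 1) scaled to [0, 1], y_train: integer labels.
# model = build_cnn()
# model.fit(x_train, y_train, validation_split=0.4, epochs=30, batch_size=64)
```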

2020, Vol 10 (2), pp. 84
Author(s): Atif Mehmood, Muazzam Maqsood, Muzaffar Bashir, Yang Shuyuan

Alzheimer’s disease (AD) can permanently damage memory cells, resulting in dementia. Diagnosing Alzheimer’s disease at an early stage is a difficult task for researchers. Machine learning and deep convolutional neural network (CNN) based approaches are readily available to solve various problems in brain image data analysis. In clinical research, magnetic resonance imaging (MRI) is used to diagnose AD. Accurate classification of dementia stages requires highly discriminative features extracted from MRI images. Recently, advanced deep CNN-based models have demonstrated strong accuracy; however, because the datasets contain relatively few image samples, over-fitting hinders the performance of deep learning approaches. In this research, we developed a Siamese convolutional neural network (SCNN) model inspired by VGG-16 (also called Oxford Net) to classify dementia stages. In our approach, we extend the insufficient and imbalanced data by using augmentation. Experiments were performed on the publicly available Open Access Series of Imaging Studies (OASIS) dataset; using the proposed approach, an excellent test accuracy of 99.05% was achieved for the classification of dementia stages. We compared our model with state-of-the-art models and found that it outperformed them in terms of performance, efficiency, and accuracy.
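One common way to arrange such a Siamese pair in Keras is sketched below: a shared VGG16 backbone embeds two inputs, an L1 distance between the embeddings feeds a small decision head, and a few augmentation layers stretch the limited MRI data. This is a hedged illustration under assumed input size, embedding width, and augmentation settings, not the authors' exact SCNN.

```python
# Hedged sketch of a Siamese arrangement with a shared VGG16 backbone, not the
# authors' exact SCNN. Input size, embedding width, and augmentation settings
# are assumptions.
import tensorflow as tf
from tensorflow.keras import Input, layers, models
from tensorflow.keras.applications import VGG16

def build_encoder(input_shape=(224, 224, 3)):
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    return models.Sequential([base,
                              layers.GlobalAveragePooling2D(),
                              layers.Dense(256, activation="relu")])

def build_siamese(input_shape=(224, 224, 3)):
    encoder = build_encoder(input_shape)            # weights shared by both branches
    in_a, in_b = Input(input_shape), Input(input_shape)
    emb_a, emb_b = encoder(in_a), encoder(in_b)
    # L1 distance between the two embeddings feeds a small decision head.
    dist = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
    out = layers.Dense(1, activation="sigmoid")(dist)
    model = models.Model([in_a, in_b], out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Simple augmentation to stretch a small, imbalanced MRI dataset, as the paper describes:
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])
```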


Fruit grading is a process that affects quality control in fruit-processing industries and determines whether production meets the efficiency demands of industry and society. These industries have suffered from a lack of quality-control standards, long grading times, and low product output because of manual methods. To meet the increasing demand for quality fruit products, fruit-processing industries must consider automating their grading process. Several algorithms have been proposed over the years for this purpose, but they relied on color and shape features and could not handle large datasets, which resulted in slow recognition and low accuracy. To mitigate these flaws, we developed an automated system for grading and classifying apples using a Convolutional Neural Network (CNN), an approach widely used in image recognition and classification. Two models were built on a CNN using ResNet50 as the convolutional base, a process called transfer learning. The first model, the apple checker model (ACM), recognizes the image with two output classes (apple and non-apple), while the apple grader model (AGM) classifies the image into four output classes (spoiled, grade A, grade B, and grade C) if the image is an apple. A comparative evaluation of both models was conducted, and experimental results show that the ACM achieved a test accuracy of 100% while the AGM obtained a recognition rate of 99.89%. The developed system may be employed in food-processing industries and related real-life applications.
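A minimal transfer-learning sketch for the two described models might look like the following: a frozen ResNet50 base with a small classification head, instantiated once with two outputs for the ACM and once with four for the AGM. Image size, head width, and optimizer settings are assumptions rather than the paper's values.

```python
# Illustrative transfer-learning sketch with a frozen ResNet50 base, mirroring
# the described ACM (apple / non-apple) and AGM (spoiled, A, B, C) models.
# Image size, head width, and optimizer settings are assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_classifier(num_classes, input_shape=(224, 224, 3)):
    base = ResNet50(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                    # reuse ImageNet features unchanged
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

apple_checker = build_classifier(num_classes=2)   # ACM: apple vs. non-apple
apple_grader = build_classifier(num_classes=4)    # AGM: spoiled, grade A, B, C
```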


2019, Vol 8 (2S11), pp. 2447-2451

Nowadays, face recognition plays a major role in identifying a specific person. There are several face recognition algorithms, such as Eigenfaces, Local Binary Pattern Histograms, and Fisherfaces. All of these algorithms face the problem of subject independence as well as translation, rotation, and scale invariance in recognizing facial expressions. In this study, face recognition using neural network and convolutional neural network (CNN) techniques was implemented with Python 3.6.6. Test accuracy improved with respect to translation, rotation, and scale invariance when face recognition was performed with the CNN.
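One practical way to obtain tolerance to the translations, rotations, and scale changes mentioned above is to augment the training faces with exactly those transformations. The abstract does not state that this was done here, so the Keras snippet below is only a hedged illustration, with assumed parameter values, image size, and a hypothetical "faces/" directory.

```python
# Hedged illustration only: augmenting training faces with shifts, rotations,
# and zoom is one common way to make a CNN tolerant to these variations. The
# generator settings, image size, and "faces/" directory are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    width_shift_range=0.1,     # translation
    height_shift_range=0.1,
    rotation_range=15,         # rotation (degrees)
    zoom_range=0.2,            # scale
    validation_split=0.2,
)

# train_gen = datagen.flow_from_directory(
#     "faces/", target_size=(64, 64), color_mode="grayscale",
#     class_mode="sparse", subset="training")
```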


SinkrOn, 2020, Vol 5 (2), pp. 199-207
Author(s): Mawaddah Harahap, Jefferson Jefferson, Surya Barti, Suprianto Samosir, Christi Andika Turnip

Malaria is a disease caused by Plasmodium parasites, which attack red blood cells. Malaria can be diagnosed by examining the patient's red blood cells under a microscope. The Convolutional Neural Network (CNN) is a rapidly developing deep learning method often used for image classification, but the CNN training process usually requires considerable resources, which is one of its weaknesses. In this study, the CNN architectures used to classify red blood cell images are LeNet-5 and DRNet. The data are segmented images of red blood cells, obtained as secondary data. Before training, data pre-processing and augmentation were carried out on the dataset. The LeNet-5 and DRNet models have 4 and 7 layers, respectively, and achieved test accuracies of 95% and 97.3%. From the test results, the LeNet-5 model was found to be more suitable for red blood cell classification: with only 4 layers, it uses fewer resources than the models in previous studies while achieving the same accuracy.
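For reference, a classic LeNet-5-style network in Keras is sketched below; the study's own 4-layer variant and the DRNet architecture are not reproduced, and the input size and class count are assumptions.

```python
# A classic LeNet-5-style network for reference; the study's 4-layer variant and
# the DRNet architecture are not reproduced. Input size and class count assumed.
from tensorflow.keras import layers, models

def build_lenet5(input_shape=(64, 64, 3), num_classes=2):
    return models.Sequential([
        layers.Conv2D(6, 5, activation="relu", input_shape=input_shape),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # parasitized vs. uninfected
    ])

model = build_lenet5()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```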


2020, pp. 31-41
Author(s): Monika Gupta

Facial expressions are the outward translation of the emotions a person feels, such as anger, sadness, happiness, and disgust. Facial expression recognition, the classification of these expressions, has applications in industries such as hospitality and medicine, to name a few. Various datasets are available for facial expression recognition; we used the FER2013 dataset to build a classification algorithm. The algorithm classifies emotions into seven categories: angry, disgust, happy, sad, fear, surprise, and neutral. A traditional convolutional neural network requires a long computing time, whereas ensemble learning significantly reduced the computing time while offering promising accuracy. Image features were extracted using the convolutional neural network and then passed to XGBoost and Random Forest classifiers, yielding accuracies of 77% and 74%, respectively. This is comparable to the 75% accuracy obtained by the traditional convolutional neural network, but with far less computing time.
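The described pipeline (a CNN as feature extractor, tree ensembles as classifiers) can be sketched as below. The small backbone here is a stand-in that would, in practice, be trained on FER2013 first so that its features are informative; array shapes, estimator counts, and the seven-class labels are assumptions.

```python
# Sketch of the described pipeline: a CNN extracts image features, then XGBoost
# and Random Forest are fit on those features. The small backbone is a stand-in
# (it would be trained on FER2013 first in practice); shapes, estimator counts,
# and the seven emotion labels are assumptions.
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

def build_feature_extractor(input_shape=(48, 48, 1)):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),   # one fixed-length vector per image
    ])

def train_ensemble(x, y):
    """x: (N, 48, 48, 1) float32 images; y: integer labels for 7 emotions."""
    extractor = build_feature_extractor()
    feats = extractor.predict(x, batch_size=64)
    xgb = XGBClassifier(n_estimators=200).fit(feats, y)
    rf = RandomForestClassifier(n_estimators=200).fit(feats, y)
    return extractor, xgb, rf
```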


2020, Vol 2020 (4), pp. 4-14
Author(s): Vladimir Budak, Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale serving as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogleNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.
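A bare-bones transfer-learning setup of the kind described could look like the sketch below, which uses torchvision because it ships a pre-trained GoogLeNet. The number of spotlight classes, the frozen backbone, and the optimizer settings are assumptions rather than the article's configuration.

```python
# Bare-bones sketch of transfer learning from a pre-trained GoogleNet;
# torchvision is used here because it ships GoogLeNet. The number of spotlight
# classes, the frozen backbone, and the optimizer settings are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10   # assumed number of beam-angle classes on the proposed scale

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                            # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)    # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```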


Author(s): P.L. Nikolaev

This article deals with a method for the binary classification of images containing small text. Classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be rotated to read it. Text of this kind is found on the covers of a variety of books, so when recognizing covers it is necessary to determine the orientation of the text before recognizing the text itself. The article describes the development of a deep neural network for determining text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
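A hedged sketch of how synthetic 0-versus-180-degree samples could be generated and classified is shown below: every upright text crop also yields a flipped copy, and a small binary CNN learns the orientation. Image sizes, layer sizes, and the label convention are assumptions, not the article's setup.

```python
# Hedged sketch of synthetic 0-vs-180-degree sample generation and a binary
# orientation CNN. Image sizes, layer sizes, and label convention are assumptions.
import numpy as np
from tensorflow.keras import layers, models

def make_orientation_samples(images):
    """images: (N, H, W, C) array of upright text crops."""
    rotated = np.rot90(images, k=2, axes=(1, 2))    # 180-degree rotation
    x = np.concatenate([images, rotated])
    y = np.concatenate([np.zeros(len(images)), np.ones(len(rotated))])
    return x, y                                     # 0 = readable, 1 = upside-down

def build_orientation_cnn(input_shape=(64, 256, 1)):
    return models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
```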


2020, Vol 14
Author(s): Lahari Tipirneni, Rizwan Patan

Abstract: Breast cancer causes millions of deaths around the world every year and has become the most common type of cancer in women. Early detection leads to a better prognosis and increases the chance of survival. Automating classification using Computer-Aided Diagnosis (CAD) systems can make the diagnosis less prone to errors. Multi-class and binary classification of breast cancer is a challenging problem. A single convolutional neural network architecture extracts specific feature descriptors from images, which cannot represent the different types of breast cancer; this leads to false positives in classification, which is undesirable in disease diagnosis. The current paper presents an ensemble convolutional neural network for multi-class and binary classification of breast cancer. The feature descriptors from each network are combined to produce the final classification. In this paper, histopathological images are taken from the publicly available BreakHis dataset and classified into 8 classes. The proposed ensemble model can perform better than the methods proposed in the literature, and the results show that it could be a viable approach for breast cancer classification.
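A hedged sketch of feature-level ensembling in the spirit described is given below: pooled descriptors from two different backbones are concatenated before a shared softmax over the eight BreakHis classes. The backbone choices (ResNet50 and VGG16) and the head size are assumptions; the paper's own member networks are not specified here.

```python
# Hedged sketch of feature-level ensembling: pooled descriptors from two
# backbones are concatenated before a shared softmax over eight classes.
# Backbone choices and head size are assumptions.
from tensorflow.keras import Input, layers, models
from tensorflow.keras.applications import ResNet50, VGG16

def build_ensemble(input_shape=(224, 224, 3), num_classes=8):
    inp = Input(input_shape)
    resnet = ResNet50(weights="imagenet", include_top=False, input_shape=input_shape)
    vgg = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    f1 = layers.GlobalAveragePooling2D()(resnet(inp))
    f2 = layers.GlobalAveragePooling2D()(vgg(inp))
    merged = layers.Concatenate()([f1, f2])         # fused feature descriptor
    x = layers.Dense(256, activation="relu")(merged)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```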

