Facial Emotions Recognition in Machine Learning

2021 ◽  
Vol 69 (4) ◽  
pp. 87-94
Author(s):  
Radu-Daniel BOLCAȘ ◽  
Diana DRANGA ◽  

Facial expression recognition (FER) is a field in which many researchers have tried to create models able to recognize emotions from a face. With applications ranging from human-machine interfaces to safety and medicine, the field has continued to develop as processing power has increased. This paper gives a broad description of the psychological aspects of FER and of the datasets and algorithms that make the neural networks possible. A literature review of recent studies in facial emotion recognition is then performed, detailing the methods and algorithms used to improve the capabilities of machine learning systems. The notable aspects of each study are discussed to highlight the novelty, related concepts, and strategies that allow the recognition to achieve good accuracy. In addition, challenges related to machine learning are discussed, such as overfitting with its possible causes and solutions, as well as challenges related to the datasets, such as expression-unrelated variation in head orientation and illumination, and dataset class bias. These aspects are examined in detail, so that the review of the difficulties that come with using deep neural networks can serve as a guideline for advancing the domain. Finally, these challenges offer insight into possible future directions for developing better FER systems.

2020 ◽  
pp. 57-63

Human facial emotion recognition has attracted interest in the field of Artificial Intelligence. The emotions on a human face depict what is going on inside the mind. Facial expression recognition is the branch of facial recognition that is gaining the most importance, and the need for it is increasing tremendously. Although there are established machine learning and Artificial Intelligence techniques for identifying expressions, this work uses convolutional neural networks to recognize expressions and classify them into six emotion categories. The datasets investigated and explored for training expression recognition models are described in this paper; the models used are VGG19 and ResNet18. Facial emotion recognition is also combined with gender identification. Using the FER2013 and CK+ datasets, we achieved accuracies of around 73% and 94%, respectively.
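The final layer of CNN classifiers such as VGG19 or ResNet18 maps image features to the six emotion categories through a softmax readout. A minimal stdlib-only sketch of that classification step (the label set ordering and the logit values here are illustrative, not taken from the paper):

```python
import math

# Six emotion categories (illustrative ordering; FER2013 itself has seven classes).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise"]

def softmax(logits):
    """Convert raw network outputs into class probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the predicted emotion label and its probability."""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[i], probs[i]

label, p = classify([0.2, -1.0, 0.1, 2.5, 0.3, -0.4])  # "happy" has the largest logit
```

In the real models, the logits come from the convolutional backbone; only the readout differs between the emotion head and the gender head.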


Atmosphere ◽  
2020 ◽  
Vol 11 (8) ◽  
pp. 823
Author(s):  
Ting Peng ◽  
Xiefei Zhi ◽  
Yan Ji ◽  
Luying Ji ◽  
Ye Tian

Extended-range temperature prediction is of great importance for public health, energy and agriculture. Two machine learning methods, namely neural networks and natural gradient boosting (NGBoost), are applied to improve the prediction skill of the 2-m maximum air temperature with lead times of 1–35 days over East Asia, based on the Environmental Modeling Center Global Ensemble Forecast System (EMC-GEFS) under the Subseasonal Experiment (SubX) of the National Centers for Environmental Prediction (NCEP). The ensemble model output statistics (EMOS) method is used as the benchmark for comparison. The results show that all the post-processing methods efficiently reduce prediction biases and uncertainties, especially in lead weeks 1–2. The two machine learning methods outperform EMOS by approximately 0.2 overall in terms of the continuous ranked probability score (CRPS). The neural networks and NGBoost are the best-performing models in more than 90% of the study area over the validation period. In our study, CRPS, which is not a common loss function in machine learning, is introduced to make probabilistic forecasting possible for traditional neural networks. Moreover, we extend the NGBoost model to probabilistic temperature forecasting in the atmospheric sciences, where it obtains satisfying performance.
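The CRPS used above as both verification metric and training loss has a well-known closed form when the forecast is Gaussian; a small sketch of that standard formula (not the authors' code):

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against observation y.
    Lower is better; for sigma -> 0 it reduces to the absolute error |y - mu|."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))
```

Because the expression is differentiable in mu and sigma, a neural network that outputs both parameters can be trained on it directly, which is how CRPS makes probabilistic forecasting possible for an otherwise deterministic network.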


Author(s):  
Yuxiao Dong ◽  
Ziniu Hu ◽  
Kuansan Wang ◽  
Yizhou Sun ◽  
Jie Tang

Representation learning has offered a revolutionary learning paradigm for various AI domains. In this survey, we examine and review the problem of representation learning with a focus on heterogeneous networks, which consist of different types of vertices and relations. The goal of this problem is to automatically project objects, most commonly vertices, of an input heterogeneous network into a latent embedding space such that both the structural and relational properties of the network are encoded and preserved. The embeddings (representations) can then be used as features for machine learning algorithms addressing the corresponding network tasks. To learn expressive embeddings, current research developments fall into two major categories: shallow embedding learning and graph neural networks. After a thorough review of the existing literature, we identify several critical challenges that remain unaddressed and discuss future directions. Finally, we build the Heterogeneous Graph Benchmark to facilitate open research on this rapidly developing topic.
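The shallow-embedding branch mentioned above amounts to lookup tables trained so that connected vertices score highly under a dot product. A toy stdlib-only sketch (the vertex types, dimension, and update rule are illustrative assumptions, not a specific method from the survey):

```python
import random

random.seed(0)
DIM = 8

# Hypothetical heterogeneous graph: authors and papers get separate embedding tables.
authors = {f"a{i}": [random.gauss(0, 1) for _ in range(DIM)] for i in range(3)}
papers = {f"p{i}": [random.gauss(0, 1) for _ in range(DIM)] for i in range(3)}

def score(u, v):
    """Dot-product affinity between two embedded vertices (a link-prediction score)."""
    return sum(x * y for x, y in zip(u, v))

def train_edge(u, v, lr=0.1):
    """One SGD step on loss = -score(u, v): pulls the embeddings of an
    observed edge's endpoints toward each other."""
    grad_u = [-x for x in v]          # d(loss)/du
    grad_v = [-x for x in u]          # d(loss)/dv
    for i in range(DIM):
        u[i] -= lr * grad_u[i]
        v[i] -= lr * grad_v[i]
```

Repeating such updates over observed edges (plus a push-apart step for negative samples, omitted here) yields vertex embeddings that downstream classifiers consume as plain feature vectors.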


2021 ◽  
Vol 13 (01) ◽  
pp. 2150001 ◽  
Author(s):  
Shoujing Zheng ◽  
Zishun Liu

We propose a machine learning-embedded method for determining the parameters of constitutive models of hydrogels. We find that the developed logistic regression-like algorithm for hydrogel swelling allows us to determine the fitting parameters from known swelling ratios and chemical potentials. We also put forward a neural network-like algorithm which, by its nature, converges faster as the layers deepen. We then develop a neural network-like algorithm for hydrogels under uniaxial load for experimental applications. Finally, we propose several machine learning-embedded potential applications for hydrogels, which should provide directions for machine learning-based hydrogel research.


2020 ◽  
pp. 1-10
Author(s):  
Roser Morante ◽  
Eduardo Blanco

Negation is a complex linguistic phenomenon present in all human languages. It can be seen as an operator that transforms an expression into another expression whose meaning is in some way opposed to the original. In this article, we survey previous work on negation with an emphasis on computational approaches. We start by defining negation and two important concepts: the scope and focus of negation. Then, we survey work in natural language processing that considers negation primarily as a means to improve results in some task. We also provide information about corpora containing negation annotations in English and other languages, which usually include a combination of annotations of negation cues, scopes, foci, and negated events. We continue the survey with a description of automated approaches to processing negation, ranging from early rule-based systems to systems built with traditional machine learning and neural networks. Finally, we conclude with some reflections on current progress and future directions.
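Early rule-based negation systems of the kind surveyed here detect a cue and then project its scope forward until a boundary. A deliberately naive stdlib sketch (the cue and boundary lists are tiny illustrative assumptions, far smaller than real systems such as NegEx):

```python
NEGATION_CUES = {"not", "no", "never", "n't", "without"}
SCOPE_STOPPERS = {".", ",", ";", "but"}

def mark_scope(tokens):
    """Tag each token: 'CUE' for a negation cue, 'NEG' for tokens inside the
    cue's scope, 'O' otherwise. Scope runs rightward until a stopper token."""
    in_scope = False
    tags = []
    for tok in tokens:
        low = tok.lower()
        if low in SCOPE_STOPPERS:      # scope ends at clause boundaries
            in_scope = False
        if low in NEGATION_CUES:
            tags.append("CUE")
            in_scope = True
        elif in_scope:
            tags.append("NEG")
        else:
            tags.append("O")
    return tags

tags = mark_scope("I did not like the movie , it was fine".split())
# "not" is the cue; "like the movie" falls inside its scope
```

Learned systems replace these hand-written lists with classifiers over token features, but the cue/scope decomposition is the same.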


Author(s):  
Divya Choudhary ◽  
Siripong Malasri

This paper implements and compares machine learning algorithms for predicting the amount of coolant required during transportation of temperature-sensitive products. The machine learning models use trip duration, product threshold temperature and ambient temperature as the independent variables to predict the weight of gel packs needed to keep the temperature of the product below its threshold value. The weight of the gel packs can be translated into the number of gel packs required. Regression using neural networks, support vector regression, gradient boosted regression and elastic net regression are compared. The neural network-based model performs best in terms of mean absolute error and r-squared values. The neural network model is then deployed as a web service, allowing client applications to make REST calls to estimate gel pack weights.
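With three numeric predictors and one target, even plain ordinary least squares gives a useful baseline for this task. A stdlib-only sketch on synthetic trips (the toy weight formula and its coefficients are invented for illustration; they are not the paper's data or model):

```python
import random

def fit_linear(X, y):
    """Ordinary least squares via the normal equations (A^T A) w = A^T y,
    solved by Gaussian elimination; a bias column of ones is appended."""
    rows = [list(x) + [1.0] for x in X]
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(n)]
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w  # [w_duration, w_threshold, w_ambient, bias]

random.seed(1)
# Synthetic trips: (duration in hours, threshold temp, ambient temp) -> gel-pack weight.
X = [(random.uniform(1, 48), random.uniform(2, 10), random.uniform(15, 40))
     for _ in range(50)]
y = [0.5 * d - 0.3 * t + 0.3 * a for d, t, a in X]   # invented ground truth

w = fit_linear(X, y)
```

The fitted signs match the physical intuition in the abstract: weight rises with duration and ambient temperature and falls as the allowed threshold temperature rises; the compared models (neural networks, SVR, gradient boosting, elastic net) refine this same mapping.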


2021 ◽  
Vol 21 (1) ◽  
pp. 50-61
Author(s):  
Chuan-Chi Wang ◽  
Ying-Chiao Liao ◽  
Ming-Chang Kao ◽  
Wen-Yew Liang ◽  
Shih-Hao Hung

In this paper, we provide a fine-grained machine learning-based method, PerfNetV2, which improves on the accuracy of our previous work for modeling neural network performance on a variety of GPU accelerators. Given an application, the proposed method can be used to predict the inference time and training time of the convolutional neural networks used in the application, which enables the system developer to optimize performance by choosing the neural networks and/or incorporating hardware accelerators that deliver satisfactory results in time. Furthermore, the proposed method is capable of predicting the performance of an unseen or non-existent device, e.g. a new GPU with a higher operating frequency, fewer processor cores, but more memory capacity. This allows a system developer to quickly search the hardware design space and/or fine-tune the system configuration. Compared to previous works, PerfNetV2 delivers more accurate results by modeling detailed host-accelerator interactions in executing the full neural networks and by improving the architecture of the machine learning model used in the predictor. Our case studies show that PerfNetV2 yields a mean absolute percentage error within 13.1% on LeNet, AlexNet, and VGG16 on NVIDIA GTX-1080Ti, while the error rate of a previous work published in ICBD 2018 could be as large as 200%.
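The evaluation metric quoted above is the standard mean absolute percentage error; for reference, a minimal sketch of it (the sample numbers are made up):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error (in %) between measured and predicted times."""
    return 100.0 * sum(abs(t - p) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

# Predictions off by 13% on each of two measurements give a MAPE of 13.0%.
err = mape([100.0, 200.0], [113.0, 226.0])
```

MAPE is scale-free, which is why it allows comparing prediction quality across networks whose runtimes differ by orders of magnitude (LeNet vs. VGG16).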


Author(s):  
Mahassine BEKKARI ◽  
EL FALLAHI Abdellah

In a new economy where immaterial capital is crucial, companies are increasingly aware of the necessity to manage human capital efficiently by optimizing its engagement in the workplace. Harnessing human capital through its engagement is an effective lever for real improvement in a company's performance. Despite the sustained attention to human resource management and the efforts undertaken to satisfy and motivate personnel, the issue of engagement persists. The main objective of this paper is to study and model the relation between eight predictors and a response variable given by employees' engagement. After carrying out a survey of employees from different companies, we used different models to determine the relation between the predictors and the dependent variable. The techniques used in this paper are linear regression, ordinal logistic regression, gradient boosting machines and neural networks. The data used in this study are the results of a questionnaire completed by 60 individuals. The results show that the neural networks slightly outperform the other models in terms of training and validation error, and also highlight the complex relation linking the predictors and the predicted variable.


Author(s):  
Menno A. Veerman ◽  
Robert Pincus ◽  
Robin Stoffer ◽  
Caspar M. van Leeuwen ◽  
Damian Podareanu ◽  
...  

The radiative transfer equations are well known, but radiation parametrizations in atmospheric models are computationally expensive. A promising tool for accelerating parametrizations is the use of machine learning techniques. In this study, we develop a machine learning-based parametrization for the gaseous optical properties by training neural networks to emulate a modern radiation parametrization (RRTMGP). To minimize computational costs, we reduce the range of atmospheric conditions for which the neural networks are applicable and use machine-specific optimized BLAS functions to accelerate matrix computations. To generate training data, we use a set of randomly perturbed atmospheric profiles and calculate optical properties using RRTMGP. Predicted optical properties are highly accurate and the resulting radiative fluxes have average errors within 0.5 W m−2 compared to RRTMGP. Our neural network-based gas optics parametrization is up to four times faster than RRTMGP, depending on the size of the neural networks. We further test the trade-off between speed and accuracy by training neural networks for the narrow range of atmospheric conditions of a single large-eddy simulation, so that smaller and therefore faster networks can achieve a desired accuracy. We conclude that our machine learning-based parametrization can speed up radiative transfer computations while retaining high accuracy. This article is part of the theme issue 'Machine learning for weather and climate modelling'.


2019 ◽  
Vol 36 (9) ◽  
pp. 1889-1902
Author(s):  
Magnus Hieronymus ◽  
Jenny Hieronymus ◽  
Fredrik Hieronymus

Long sea level records with high temporal resolution are of paramount importance for future coastal protection and adaptation plans. Here we discuss the application of machine learning techniques to some regression problems commonly encountered when analyzing such time series. The performance of artificial neural networks is compared with that of multiple linear regression models on sea level data from the Swedish coast. The neural networks are found to be superior when local sea level forcing is used together with remote sea level forcing and meteorological forcing, whereas the linear models and the neural networks show similar performance when local sea level forcing is excluded. The overall performance of the machine learning algorithms is good, often surpassing that of the much more computationally costly numerical ocean models used at our institute.

