Modeling Autonomic Pupillary Responses from External Stimuli using Machine Learning

2019 ◽  
Author(s):  
David Lary

The human body exhibits a variety of autonomic responses. For example, changing light intensity provokes a change in pupil dilation. In the past, formulae for pupil size based on luminance have been derived using traditional empirical approaches. In this paper, we present a different approach to a similar task, using machine learning to examine the multivariate non-linear autonomic response of pupil dilation as a function of a comprehensive suite of more than four hundred environmental parameters, yielding quantitative empirical models. The objectively optimized empirical machine learning models use a multivariate non-linear non-parametric supervised regression algorithm employing an ensemble of regression trees that receives both spectral and biometric input data. The models predicting the participants' pupil diameters from these inputs had a fidelity of at least 96.9% for both the training and independent validation data sets. The most important inputs were the light levels (irradiance) at wavelengths near 562 nm. This coincides with the peak sensitivity of the long-wave photosensitive cones in the retina, which exhibit a maximum absorbance around λmax = 562.8 ± 4.7 nm.
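
A minimal Python sketch of this type of model follows, assuming a tabular data set with one column per spectral or biometric input and the pupil diameter as the target; the file name, column names, and the use of scikit-learn's RandomForestRegressor as the tree-ensemble regressor are illustrative assumptions, not the authors' exact pipeline.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("pupil_dataset.csv")              # hypothetical file name
X = df.drop(columns=["pupil_diameter_mm"])         # spectral irradiance + biometric inputs
y = df["pupil_diameter_mm"]                        # hypothetical target column

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Ensemble of regression trees as a stand-in for the paper's tree-ensemble regressor.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_val, y_val))

# Rank inputs by importance; the abstract reports irradiance near 562 nm as dominant.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))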

2021 ◽  
Author(s):  
Jessica Röhner ◽  
Philipp Thoss ◽  
Astrid Schütz

Research has shown that even experts cannot detect faking above chance, but recent studies have suggested that machine learning may help in this endeavor. However, faking differs between faking conditions; previous efforts have not taken these differences into account, and faking indices have yet to be integrated into such approaches. We reanalyzed seven data sets (N = 1,039) with various faking conditions (high and low scores, different constructs, naïve and informed faking, faking with and without practice, different measures [self-reports vs. implicit association tests; IATs]). We investigated the extent to which, and how, machine learning classifiers could detect faking under these conditions, and we compared different input data (response patterns, scores, faking indices) and different classifiers (logistic regression, random forest, XGBoost). We also explored the features that classifiers used for detection. Our results show that machine learning has the potential to detect faking, but detection success varies between conditions from chance levels to 100%. There were differences in detection (e.g., detecting low-score faking was easier than detecting high-score faking). For self-reports, response patterns and scores were comparable with regard to faking detection, whereas for IATs, faking indices and response patterns were superior to scores. Logistic regression and random forest worked about equally well and outperformed XGBoost. In most cases, classifiers used more than one feature (faking occurred over different pathways), and the features varied in their relevance. Our research supports the assumption of different faking processes and explains why detecting faking is a complex endeavor.
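
A sketch of such a classifier comparison is shown below, assuming a labeled table of responses for one faking condition; the file and column names are hypothetical, and the xgboost package is assumed to be installed alongside scikit-learn.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier                   # assumes the xgboost package is installed

df = pd.read_csv("faking_condition_data.csv")       # hypothetical file
X = df.drop(columns=["faked"])                      # response patterns / scores / faking indices
y = df["faked"]                                     # 1 = instructed faking, 0 = honest responding

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "xgboost": XGBClassifier(eval_metric="logloss", random_state=0),
}

# Compare the classifiers with 5-fold cross-validated accuracy.
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")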


2021 ◽  
Author(s):  
Fumio Machida

An N-version machine learning system (MLS) is an architectural approach to reducing erroneous outputs from a system through a redundant configuration of multiple machine learning (ML) modules. The system reliability improvement achieved by an N-version MLS inherently depends on how diverse the employed ML models are and how diverse the given input data sets are. However, neither the error input spaces of individual ML models nor the input data distributions are obtainable in practice, which is a fundamental barrier to understanding the reliability gain from an N-version architecture. In this paper, we introduce two diversity measures quantifying, respectively, the similarity of ML models' capabilities and the interdependence of input data sets. The defined measures are used to formulate the reliability of an elemental N-version MLS called a dependent double-module, double-input MLS. The system is assumed to fail when the two ML modules output errors simultaneously for the same classification task. The reliabilities of different architecture options for this MLS are comprehensively analyzed through a compact matrix representation of the proposed reliability model. Except for limiting cases, we observe that the architecture exploiting both diversities tends to achieve preferable reliability under reasonable assumptions. Intuitive relations between the diversity parameters and architecture reliabilities are also demonstrated through numerical experiments with hypothetical settings.
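
The paper's closed-form reliability model is not reproduced here, but the failure event it analyzes, both modules erring on the same task, can be illustrated with a simple shared-cause sketch in Python; this toy correlation model is an assumption for illustration only, not the paper's diversity measures.

# Illustrative sketch (not the paper's formulation): a shared-cause model of how error
# correlation between two ML modules changes the probability that both err on the same
# input, which is the failure event of a double-module MLS.

def double_module_failure(p_err: float, rho: float) -> float:
    """P(both modules err) when each module has marginal error rate p_err and a
    common-cause error occurs with probability rho * p_err (rho in [0, 1])."""
    shared = rho * p_err                                   # probability of a common-cause error
    # Residual independent error rate chosen so each module's marginal rate stays p_err.
    indep = (p_err - shared) / (1.0 - shared) if shared < 1.0 else 0.0
    return shared + (1.0 - shared) * indep ** 2

for rho in (0.0, 0.25, 0.5, 1.0):
    print(f"rho={rho:.2f}: system failure probability = {double_module_failure(0.1, rho):.4f}")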


2020 ◽  
Vol 197 ◽  
pp. 04001
Author(s):  
Francesco Salamone ◽  
Alice Bellazzi ◽  
Lorenzo Belussi ◽  
Gianfranco Damato ◽  
Ludovico Danza ◽  
...  

Personal Thermal Comfort models differ from steady-state methods in that they consider personal user feedback as the target value. Today, the availability of integrated "smart" devices following the concept of the Internet of Things, together with Machine Learning (ML) techniques, allows the development of frameworks that reach optimized indoor thermal comfort conditions. The article investigates the potential of such an approach through an experimental campaign in a test cell involving 25 participants in a Real (R) and a Virtual (VR) scenario, aiming at evaluating the effect of external stimuli, such as variations in the colours and images of the environment, on personal thermal perception. A dataset with environmental parameters, biometric data and the participants' perceived comfort feedback is defined and managed with ML algorithms in order to identify the most suitable algorithm and the most influential variables for predicting Personal Thermal Comfort Perception (PTCP). The results identify the Extra Trees classifier as the best algorithm. In both the R and VR scenarios, a different group of variables allows predicting PTCP with high accuracy.
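
A minimal sketch of the classification step follows, using scikit-learn's ExtraTreesClassifier; the file name, feature columns, and the PTCP label column are hypothetical placeholders for the experimental data set.

import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("thermal_comfort_testcell.csv")    # hypothetical file
X = df.drop(columns=["ptcp"])                        # environmental + biometric features, scenario flag
y = df["ptcp"]                                       # reported comfort class (hypothetical label column)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
print("mean cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Fit on all data to inspect which variables drive the prediction.
clf.fit(X, y)
print(pd.Series(clf.feature_importances_, index=X.columns).sort_values(ascending=False))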


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. V163-V177 ◽  
Author(s):  
Yongna Jia ◽  
Jianwei Ma

Machine learning (ML) systems can automatically mine data sets for hidden features or relationships. Recently, ML methods have been used increasingly within many scientific fields. We have evaluated common applications of ML, and we have developed a novel method based on the classic ML method of support vector regression (SVR) for reconstructing seismic data from under-sampled or missing traces. First, the SVR method mines a continuous regression hyperplane from training data that captures the hidden relationship between input data with missing traces and complete output data; it then interpolates missing seismic traces for other input data by using the learned hyperplane. The key idea of our new ML method differs significantly from that of many previous interpolation methods: it depends on the characteristics of the training data rather than on assumptions of linear events, sparsity, or low rank. Therefore, it can break free of those assumptions or constraints and generalize to different data sets. In addition, our method dramatically reduces the manual workload; for example, it allows users to avoid selecting window-size parameters, as is required by methods based on the assumption of linear events. The ML method facilitates intelligent interpolation between data sets with similar geomorphological structures, which can significantly reduce costs in engineering applications. Furthermore, we combine a sparse transform called the data-driven tight frame (so-called compressed learning) with the SVR method to improve the training performance, in which the training is implemented in a sparse coefficient domain rather than in the data domain. Numerical experiments show the competitive performance of our method in comparison with the traditional f-x interpolation method.
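
The sketch below illustrates the SVR interpolation idea on synthetic data: the amplitudes of neighbouring traces predict the amplitude of a missing trace, with training on gathers where that trace is present. The synthetic gather construction and hyperparameters are illustrative assumptions, not the authors' setup, which also includes the data-driven tight frame step.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_gathers, n_samples = 60, 64
data = rng.standard_normal((n_gathers, 5, n_samples))
# Give the middle trace structure to learn: a smooth mixture of its neighbours plus noise.
data[:, 2, :] = (0.5 * data[:, 1, :] + 0.5 * data[:, 3, :]
                 + 0.05 * rng.standard_normal((n_gathers, n_samples)))

# Features: the four neighbouring traces' amplitudes at each time sample.
X = data[:, [0, 1, 3, 4], :].transpose(0, 2, 1).reshape(-1, 4)
y = data[:, 2, :].reshape(-1)                      # target: the (possibly missing) trace

train = slice(0, 40 * n_samples)                   # gathers where the trace was recorded
test = slice(40 * n_samples, None)                 # gathers where the trace is "missing"

svr = SVR(kernel="rbf", C=1.0, epsilon=0.01)
svr.fit(X[train], y[train])
reconstructed = svr.predict(X[test])
print("correlation with the true trace:", np.corrcoef(reconstructed, y[test])[0, 1])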


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA269-WA277
Author(s):  
Xudong Duan ◽  
Jie Zhang

Picking first breaks from seismic data is often a challenging problem that still requires significant human effort. We have developed an iterative process that applies a traditional automated seismic picking method to obtain preliminary first breaks and then uses a machine learning (ML) method to identify, remove, and fix poor picks based on a multitrace analysis. The ML method involves constructing a convolutional neural network architecture that helps identify poor picks across multiple traces and eliminate them. We then refill the picks on empty traces with the help of the trained model. To make the training samples applicable to various regions and different data sets, we apply moveout correction with the preliminary picks and address the picks in the flattened input. We collected 11,239,800 labeled seismic traces. During the training process, the model's classification accuracy on the training and validation data sets reached 98.2% and 97.3%, respectively. We also evaluated the precision and recall rate, both of which exceed 94%. For prediction, results on 2D and 3D data sets that differ from the training data sets demonstrate the feasibility of our method.
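
A small convolutional classifier for pick quality control, in the spirit of the multitrace analysis described above, might look like the Keras sketch below; patch dimensions, layer sizes, and the random stand-in data are illustrative assumptions, and TensorFlow is assumed to be installed.

import numpy as np
from tensorflow.keras import layers, models        # assumes TensorFlow is installed

n_traces, n_samples = 15, 64                        # patch of flattened traces centred on a pick

model = models.Sequential([
    layers.Input(shape=(n_traces, n_samples, 1)),
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),          # probability that the centre pick is good
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data; real training would use labelled patches of moveout-corrected field traces.
X = np.random.randn(256, n_traces, n_samples, 1).astype("float32")
y = np.random.randint(0, 2, size=(256,)).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))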


2016 ◽  
Vol 3 (1) ◽  
Author(s):  
LAL SINGH ◽  
PARMEET SINGH ◽  
RAIHANA HABIB KANTH ◽  
PURUSHOTAM SINGH ◽  
SABIA AKHTER ◽  
...  

WOFOST version 7.1.3 is a computer model that simulates the growth and production of annual field crops. All run options are operated through a graphical user interface named WOFOST Control Center version 1.8 (WCC). WCC facilitates selecting the production level and the input data sets on crop, soil, weather, crop calendar, hydrological field conditions and soil fertility parameters, as well as the output options. The files with crop, soil and weather data are explained, as are the run files and the output files. A general overview is given of the development and applications of the model, and its underlying concepts are discussed briefly.
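
Purely as an illustration of the run options WCC exposes (production level, crop/soil/weather data sets, crop calendar, hydrological conditions, output selection), the following Python sketch shows how such a configuration could be represented in code; it is a hypothetical structure, not the WCC or WOFOST interface.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WofostRunConfig:
    """Hypothetical container mirroring the choices exposed by WCC."""
    production_level: str = "potential"             # e.g. "potential" or "water-limited"
    crop_file: str = "crop.cab"                     # crop parameter data set
    soil_file: str = "soil.cab"                     # soil parameter data set
    weather_file: str = "weather.cab"               # daily weather data set
    crop_calendar: str = "calendar.dat"             # sowing/emergence and harvest dates
    hydrological_conditions: str = "free-draining"  # hydrological field conditions
    output_variables: List[str] = field(default_factory=lambda: ["TAGP", "LAI", "TWSO"])

cfg = WofostRunConfig(production_level="water-limited")
print(cfg)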


2021 ◽  
Vol 34 (2) ◽  
pp. 541-549 ◽  
Author(s):  
Leihong Wu ◽  
Ruili Huang ◽  
Igor V. Tetko ◽  
Zhonghua Xia ◽  
Joshua Xu ◽  
...  

2021 ◽  
Vol 13 (13) ◽  
pp. 2433
Author(s):  
Shu Yang ◽  
Fengchao Peng ◽  
Sibylle von Löwis ◽  
Guðrún Nína Petersen ◽  
David Christian Finger

Doppler lidars are used worldwide for wind monitoring and, recently, also for the detection of aerosols. Automatic algorithms that classify the signals retrieved from lidar measurements are very useful for users. In this study, we explore the value of machine learning for classifying backscattered signals from Doppler lidars using data from Iceland. We combined supervised and unsupervised machine learning algorithms with conventional lidar data processing methods and trained two models to filter noise signals and to classify Doppler lidar observations into different classes, including clouds, aerosols and rain. The results reveal high accuracy for noise identification and for the classification of aerosols and clouds; however, precipitation detection is underestimated. The method was tested on data sets from two instruments under different weather conditions, including three dust storms during the summer of 2019. Our results reveal that this method can provide efficient, accurate and real-time classification of lidar measurements. Accordingly, we conclude that machine learning can open new opportunities for lidar data end-users, such as aviation safety operators, to monitor dust in the vicinity of airports.
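
The two-stage idea, unsupervised noise filtering followed by supervised classification of the remaining range gates, can be sketched in Python as below; the input file, feature names, and the choice of KMeans plus a random forest are illustrative assumptions rather than the study's exact algorithms.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("doppler_lidar_gates.csv")         # hypothetical file of per-gate observations
features = ["backscatter", "doppler_velocity", "spectral_width", "snr"]   # hypothetical columns

# Stage 1: unsupervised noise filtering on SNR-related features.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(df[["snr", "backscatter"]])
signal_cluster = km.labels_[df["snr"].idxmax()]      # cluster containing the strongest signal
signal = df[km.labels_ == signal_cluster]

# Stage 2: supervised classification of the remaining gates (labels from manual QC, hypothetical).
X_train, X_test, y_train, y_test = train_test_split(
    signal[features], signal["label"], test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))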

