Critical Examination of the Parametric Approaches to Analysis of the Non-Verbal Human Behavior: A Case Study in Facial Pre-Touch Interaction

2020 ◽  
Vol 10 (11) ◽  
pp. 3817
Author(s):  
Soheil Keshmiri ◽  
Masahiro Shiomi ◽  
Kodai Shatani ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

A prevailing assumption in many behavioral studies is that the data under investigation are normally distributed. Although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed human behavioral responses. In the present study, we examine the extent of such assumptions by considering the case of human–human touch interaction in which individuals signal their face-area pre-touch distance boundaries. We then use these pre-touch distances, along with their respective azimuth and elevation angles around the face area, and perform three types of regression-based analyses to estimate a generalized facial pre-touch distance boundary. First, we use Gaussian process regression to evaluate whether the assumption of normally distributed participant reactions warrants a reliable estimate of this boundary. Second, we apply support vector regression (SVR) to determine whether estimating this space by minimizing the orthogonal distance between participants' pre-touch data and its corresponding pre-touch boundary can yield a better result. Third, we use ordinary regression to validate the utility of a non-parametric regressor with a simple regularization criterion in estimating such a pre-touch space. In addition, we compare these models with scenarios in which a fixed boundary distance (i.e., a spherical boundary) is adopted. We show that, within the context of facial pre-touch interaction, a normal distribution does not capture the variability exhibited by human subjects during such non-verbal interaction.
We also provide evidence that such interactions can be more adequately estimated by accounting for individuals' variable behavior and preferences through estimation strategies such as ordinary regression, which relies solely on the distribution of the observed behavior, a distribution that need not be parametric.
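The three regression strategies compared above can be sketched with scikit-learn on synthetic data. Everything here is illustrative, not the authors' setup: the angular features and the deliberately non-Gaussian (exponential) distance noise are invented, and Ridge stands in for the "ordinary regression with a simple regularization criterion."

```python
# Hedged sketch: comparing regressors for estimating a pre-touch distance
# boundary from azimuth/elevation angles, on synthetic (not the authors') data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
azimuth = rng.uniform(-np.pi, np.pi, n)
elevation = rng.uniform(-np.pi / 4, np.pi / 4, n)
# Synthetic, deliberately non-Gaussian pre-touch distances (cm).
distance = 30 + 10 * np.cos(azimuth) + rng.exponential(5, n)

X = np.column_stack([azimuth, elevation])
models = {
    "GPR": GaussianProcessRegressor(),      # assumes Gaussian residuals
    "SVR": SVR(kernel="rbf", C=10.0),       # epsilon-insensitive loss
    "Ridge": Ridge(alpha=1.0),              # simple regularized regression
}
for name, model in models.items():
    scores = cross_val_score(model, X, distance, cv=5,
                             scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {-scores.mean():.2f} cm")
```

Because the noise is skewed, a comparison like this makes visible how much each model's distributional assumption matters, which is the paper's central point.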

2021 ◽  
Vol 12 ◽  
Author(s):  
Ridge G. Weston ◽  
Paul J. Fitzgerald ◽  
Brendon O. Watson

The anesthetic drug ketamine has been successfully repurposed as an antidepressant in human subjects. This represents a breakthrough for clinical psychopharmacology because, unlike monoaminergic antidepressants, ketamine has rapid onset, including in Major Depressive Disorder (MDD) that is resistant to conventional pharmacotherapy. This rapid therapeutic onset suggests a unique mechanism of action, which continues to be investigated in reverse-translational studies in rodents. A large fraction of rodent and human studies of ketamine have focused on the effects of only a single administration, which presents a problem because MDD is typically a persistent illness that may require ongoing treatment with this drug to prevent relapse. Here we review behavioral studies in rodents that used repeated dosing of ketamine in the forced swim test (FST), with an eye toward eventual mechanistic studies. A subset of these studies carried out additional experiments with only a single injection of ketamine for comparison, and several used chronic psychosocial stress, a known causative factor in some cases of MDD. We find that repeated ketamine can in some cases paradoxically increase immobility in the FST, especially at high doses such as 50 or 100 mg/kg. Several studies, however, provide evidence that repeated dosing is more effective than a single dose at decreasing immobility, including behavioral effects that last longer. Collectively, this growing literature suggests that repeated dosing of ketamine has prominent depression-related effects in rodents, and further investigation may help optimize the use of this drug in humans experiencing MDD.


2022 ◽  
Vol 31 (2) ◽  
pp. 1-30
Author(s):  
Fahimeh Ebrahimi ◽  
Miroslav Tushev ◽  
Anas Mahmoud

Modern application stores enable developers to classify their apps by choosing from a set of generic categories, or genres, such as health, games, and music. These categories are typically static: new categories do not necessarily emerge over time to reflect innovations in the mobile software landscape. With thousands of apps classified under each category, locating apps that match a specific consumer interest can be a challenging task. To overcome this challenge, in this article we propose an automated approach for classifying mobile apps into more focused categories of functionally related application domains. Our aim is to enhance app visibility and discoverability. Specifically, we employ word embeddings to generate numeric semantic representations of app descriptions. These representations are then classified to generate more cohesive categories of apps. Our empirical investigation is conducted using a dataset of 600 apps sampled from the Education, Health & Fitness, and Medical categories of the Apple App Store. The results show that our classification algorithms achieve their best performance when app descriptions are vectorized using GloVe, a count-based model of word embeddings. Our findings are further validated using a dataset of Sharing Economy apps, with the results evaluated by 12 human subjects. These results show that GloVe combined with Support Vector Machines can produce app classifications that are aligned to a large extent with human-generated classifications.
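A rough sketch of such a description-classification pipeline follows. TF-IDF vectors stand in for the GloVe embeddings used in the article, and the six toy descriptions and their labels are invented for illustration:

```python
# Minimal sketch: vectorize app descriptions, then classify with a linear SVM.
# TF-IDF is a stand-in for GloVe; the data below is invented, not the paper's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

descriptions = [
    "track your daily workouts and calories burned",
    "guided meditation and breathing exercises for sleep",
    "flashcards and quizzes for learning vocabulary",
    "interactive math lessons for middle school students",
    "log symptoms and medication reminders for patients",
    "connect with your doctor and view lab results",
]
labels = ["fitness", "fitness", "education", "education", "medical", "medical"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(descriptions, labels)
print(clf.predict(["daily running tracker with calorie goals"]))
```

Swapping the vectorizer for pretrained GloVe vectors (averaging word vectors per description) would bring the sketch closer to the approach the article evaluates.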


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2694
Author(s):  
Sang-Yeong Jo ◽  
Jin-Woo Jeong

Visual memorability measures how easily media content can be memorized. Predicting the visual memorability of media content has recently become more important because it can affect the design principles of multimedia visualization, advertisement, etc. Previous studies on predicting the visual memorability of images generally exploited visual features (e.g., color intensity and contrast) or semantic information (e.g., class labels) that can be extracted from images. Other works tried to exploit electroencephalography (EEG) signals of human subjects to predict the memorability of text (e.g., word pairs). In contrast to previous works, we focus on predicting the visual memorability of images based on human biological feedback (i.e., EEG signals). For this, we design a visual memory task in which each subject is asked to answer whether they correctly remember a particular image 30 min after glancing at a set of images sampled from the LaMem dataset. During the visual memory task, EEG signals are recorded from subjects as human biological feedback. The collected EEG signals are then used to train various classification models for predicting image memorability. Finally, we evaluate and compare the performance of classification models, including deep convolutional neural networks and classical methods such as support vector machines, decision trees, and k-nearest neighbors. The experimental results show that EEG-based prediction of memorability is still challenging but is a promising approach with various opportunities and potential.
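The classical-classifier comparison can be sketched as follows. The random feature matrix merely stands in for recorded EEG features (the actual dataset is not reproduced here), so near-chance accuracies are the expected outcome:

```python
# Sketch of comparing SVM, decision tree, and k-NN classifiers; random
# features stand in for EEG features, so accuracies hover around chance.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 32))          # 120 trials x 32 assumed EEG features
y = rng.integers(0, 2, size=120)        # remembered (1) vs. forgotten (0)

for clf in (SVC(), DecisionTreeClassifier(), KNeighborsClassifier()):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__}: {acc:.2f}")
```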


2017 ◽  
Vol 5 (1) ◽  
pp. 17-29 ◽  
Author(s):  
Taro Nakano ◽  
B.T. Nukala ◽  
J. Tsay ◽  
Steven Zupancic ◽  
Amanda Rodriguez ◽  
...  

Due to serious concerns about fall risks for patients with balance disorders, it is desirable to objectively identify these patients during real-time dynamic gait testing using inexpensive wearable sensors. In this work, the authors took a total of 49 gait tests from 7 human subjects (3 normal subjects and 4 patients), where each person performed 7 Dynamic Gait Index (DGI) tests while wearing a wireless gait sensor on the T4 thoracic vertebra. The raw gait data is wirelessly transmitted to a nearby PC for real-time gait data collection. To objectively identify the patients from the gait data, the authors used 4 different types of Support Vector Machine (SVM) classifiers based on the 6 features extracted from the raw gait data: Linear SVM, Quadratic SVM, Cubic SVM, and Gaussian SVM. The Linear, Quadratic, and Cubic SVMs all achieved an impressive 98% classification accuracy, with 95.2% sensitivity and 100% specificity. However, the Gaussian SVM classifier achieved only 87.8% accuracy, 71.7% sensitivity, and 100% specificity. The results obtained with this small number of human subjects indicate that, in the near future, the authors should be able to objectively identify balance-disorder patients from normal subjects during real-time dynamic gait testing using intelligent SVM classifiers.
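The four SVM variants map naturally onto scikit-learn kernels (linear, polynomial of degree 2 and 3, and RBF). In this sketch the 49-by-6 feature matrix is a synthetic stand-in for the extracted gait features, so the scores are not the paper's:

```python
# Sketch: the paper's four SVM types expressed as scikit-learn kernels,
# evaluated on synthetic stand-in gait features (49 tests x 6 features).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(49, 6))            # 49 gait tests x 6 features (assumed)
y = rng.integers(0, 2, size=49)         # patient (1) vs. normal (0)

kernels = {
    "Linear": SVC(kernel="linear"),
    "Quadratic": SVC(kernel="poly", degree=2),
    "Cubic": SVC(kernel="poly", degree=3),
    "Gaussian": SVC(kernel="rbf"),
}
for name, clf in kernels.items():
    acc = cross_val_score(clf, X, y, cv=7).mean()
    print(f"{name} SVM: {acc:.2f}")
```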


2011 ◽  
Vol 314-316 ◽  
pp. 2482-2485 ◽  
Author(s):  
Shu Guang He ◽  
Chuan Yan Zhang

An SVDD (Support Vector Data Description) based MCUSUM (Multivariate Cumulative Sum) chart, referred to as the S-MCUSUM chart, is proposed; it has the advantage of being distribution-free. In numerical experiments, the performance of the S-MCUSUM chart is compared to that of the COT (Cumulative of T) chart. The results show that the COT chart is somewhat better than the S-MCUSUM chart for multivariate normally distributed data. However, the S-MCUSUM chart is much better than the COT chart for banana-shaped data, a typical non-normal distribution.
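SVDD with an RBF kernel is closely related to scikit-learn's `OneClassSVM`, which can serve as a minimal sketch of the boundary-description step (the MCUSUM accumulation step is omitted, and the banana-shaped data is generated here purely for illustration):

```python
# Sketch: an SVDD-like boundary fit to banana-shaped data using OneClassSVM,
# which for an RBF kernel is closely related to SVDD. Data is synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
t = rng.uniform(0, np.pi, 300)
# Banana-shaped cloud: points scattered around an arc.
X = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.1, (300, 2))

# nu upper-bounds the fraction of training points left outside the boundary.
svdd = OneClassSVM(kernel="rbf", nu=0.05, gamma=2.0).fit(X)
inliers = (svdd.predict(X) == 1).mean()
print(f"fraction inside boundary: {inliers:.2f}")
```

A distribution-free boundary like this is exactly what lets the chart handle data that a normality-based control chart would mis-model.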


2020 ◽  
Author(s):  
Mostafa Alizadeh ◽  
George Shaker ◽  
Safieddin Safavi-Naeini

Many patients, such as bedbound and hospitalized patients, require continuous monitoring of vital signs and sleep position. In some cases, such as COVID-19, it is also critical for a caregiver to keep a safe distance from the patient. Radar technologies have been shown to be promising for remote monitoring. Thus, in this paper, we present a novel solution for remote breathing and sleep-position monitoring using a multi-input multi-output (MIMO) radar. Our proposed system can monitor several people simultaneously, using high-resolution direction-of-arrival (DOA) detection to resolve closely spaced targets. The sleep position of each target is determined using a support vector machine (SVM) classifier. The breathing analysis involves designing an optimum filter for estimating both the breathing rate and the noiseless breathing waveform. We tested the system with hand-made targets and real human targets, placing the radar in a bedroom environment above a bed where two subjects were sleeping next to each other. For the breathing rate, the accuracy of the radar is more than 97% for human subjects compared with a reference sensor, and sleep position is correctly detected more than 83% of the time.
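The breathing-rate step can be illustrated as a spectral-peak search on a simulated chest-displacement waveform. The frame rate, noise level, and breathing band below are assumptions, and the MIMO/DOA processing and the paper's optimum filter are omitted:

```python
# Sketch: estimate breathing rate as the dominant spectral peak of a
# simulated chest-displacement signal. All parameters are assumptions.
import numpy as np

fs = 20.0                               # assumed radar frame rate (Hz)
t = np.arange(0, 60, 1 / fs)            # 60 s observation window
true_rate = 15 / 60                     # 15 breaths per minute, in Hz
signal = (np.sin(2 * np.pi * true_rate * t)
          + 0.3 * np.random.default_rng(4).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)    # plausible adult breathing band
est = freqs[band][np.argmax(spectrum[band])]
print(f"estimated rate: {est * 60:.1f} breaths/min")
```

Restricting the peak search to a physiological band is a common way to keep motion artifacts and DC drift from dominating the estimate.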


2008 ◽  
Vol 2 (4) ◽  
Author(s):  
Claudio Castellini

In critical human/robotic interactions, such as teleoperation by a disabled master or with insufficient bandwidth, it is highly desirable to have semi-autonomous robotic artifacts interact with a human being. Semi-autonomous grasping, for instance, consists of having a smart slave able to guess the master's intentions and initiate a grasping sequence whenever the master wants to grasp an object in the slave's workspace. In this paper we investigate the possibility of building such an intelligent robotic artifact by training a machine learning system on data gathered from several human subjects while trying to grasp objects in a teleoperation setup. In particular, we investigate the usefulness of gaze tracking in such a scenario. The resulting system must be light enough to be usable on-line and flexible enough to adapt to different masters, e.g., elderly and/or slow ones. The outcome of the experiment is that such a system, based upon Support Vector Machines, meets all the requirements, being (a) highly accurate, (b) compact and fast, and (c) largely unaffected by the subjects' diversity. It is also clearly shown that gaze tracking significantly improves both the accuracy and compactness of the obtained models, compared with the use of the hand position alone. The system can be trained with roughly 3.5 minutes of human data in the worst case.


2020 ◽  
Vol 11 ◽  
Author(s):  
Joram Soch

When predicting a certain subject-level variable (e.g., age in years) from measured biological data (e.g., structural MRI scans), the decoding algorithm does not always preserve the distribution of the variable to be predicted. In such a situation, distributional transformation (DT), i.e., mapping the predicted values to the variable's distribution in the training data, might improve decoding accuracy. Here, we tested the potential of DT within the 2019 Predictive Analytics Competition (PAC), which aimed at predicting the chronological age of adult human subjects from structural MRI data. In a low-dimensional setting, i.e., with fewer features than observations, we applied multiple linear regression, support vector regression, and deep neural networks for out-of-sample prediction of subject age. We found that (i) when the number of features is low, no method outperforms linear regression; and (ii) except when using deep regression, distributional transformation increases decoding performance, reducing the mean absolute error (MAE) by about half a year. We conclude that DT can be advantageous when predicting variables that are non-controlled but have an underlying distribution in healthy or diseased populations.
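The DT idea described above, mapping predicted values onto the variable's distribution in the training data, can be sketched as a rank-based quantile mapping. The toy ages below are invented for illustration, and this particular rank-interpolation scheme is one simple way to realize the mapping, not necessarily the paper's exact procedure:

```python
# Sketch of distributional transformation: each prediction is replaced by
# the training-target value at the matching quantile of its rank.
import numpy as np

def distributional_transform(predicted, train_targets):
    """Map predictions onto the empirical distribution of the training targets."""
    ranks = np.argsort(np.argsort(predicted))        # rank of each prediction
    quantiles = np.sort(train_targets)
    # Spread ranks evenly across the sorted training targets.
    idx = np.round(ranks / (len(predicted) - 1)
                   * (len(quantiles) - 1)).astype(int)
    return quantiles[idx]

train_ages = np.array([22, 25, 30, 34, 41, 47, 55, 63, 70, 78])
preds = np.array([35.0, 52.0, 28.0, 60.0, 44.0])
print(distributional_transform(preds, train_ages))
```

The transform preserves the ordering of the predictions while forcing their marginal distribution to match that of the training targets, which is what corrects a decoder that systematically compresses the target range.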


2008 ◽  
Vol 20 (2) ◽  
pp. 486-503 ◽  
Author(s):  
Stephen José Hanson ◽  
Yaroslav O. Halchenko

Over the past decade, object recognition work has confounded voxel response detection with potential voxel class identification. Consequently, the claim that there are areas of the brain that are necessary and sufficient for object identification cannot be resolved with the existing associative methods (e.g., the general linear model) that are dominant in brain imaging. To explore this controversy, we trained full-brain (40,000 voxels) single-TR (repetition time) classifiers on data from 10 subjects in two different recognition tasks on the most controversial classes of stimuli (house and face) and show 97.4% median out-of-sample (unseen TRs) generalization. This performance allowed us to reliably and uniquely assay the classifier's voxel diagnosticity in all individual subjects' brains. In this two-class case, there may be specific areas diagnostic for house stimuli (e.g., LO) or for face stimuli (e.g., STS); however, in contrast to the detection results common in this literature, neither the fusiform face area nor the parahippocampal place area is shown to be uniquely diagnostic for faces or places, respectively.


2005 ◽  
Vol 15 (01n02) ◽  
pp. 121-128 ◽  
Author(s):  
SAMARASENA BUCHALA ◽  
NEIL DAVEY ◽  
RAY J. FRANK ◽  
MARTIN LOOMES ◽  
TIM M. GALE

Most computational models for gender classification use global information (the full face image), giving equal weight to the whole face area irrespective of the importance of the internal features. Here, we use a combined global and feature-based representation of face images that includes both global and featural information. We use dimensionality reduction techniques and a support vector machine classifier and show that this method performs better than either the global or the feature-based representation alone. We also present results of human subjects' performance on a gender classification task and evaluate how the different dimensionality reduction techniques compare with human performance. The results support the psychological plausibility of the combined global and feature-based representation.
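The reduction-plus-classifier pipeline can be sketched with PCA standing in for the dimensionality reduction techniques the paper compares. The pixel vectors and labels below are synthetic stand-ins, so the accuracy is illustrative only:

```python
# Sketch: dimensionality reduction (PCA) followed by an SVM classifier,
# on synthetic stand-ins for flattened face-image pixel vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 400))         # 100 face images x 400 pixels (assumed)
y = rng.integers(0, 2, size=100)        # binary gender label

clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

In the paper's setting, the same pipeline would be run once on the full image and once on concatenated feature regions, and the two representations compared against human performance.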

