recognition experiment
Recently Published Documents

TOTAL DOCUMENTS: 75 (FIVE YEARS: 20)
H-INDEX: 10 (FIVE YEARS: 2)

2021 ◽  
Vol 2138 (1) ◽  
pp. 012021
Author(s):  
Zhongdong Wang

Abstract As an important raw material and petrochemical feedstock, petroleum not only brings convenience to mankind and considerable socio-economic and cultural value to social development, but also causes serious damage to the ecological environment. The identification and measurement of petroleum pollutants has therefore become the main tool for identifying pollution sources, controlling pollutants, and protecting the ecological environment. This paper explores a petroleum fluorescence spectrum identification method based on a convolutional neural network. Building on extensive research into this method, we briefly analyze petroleum fluorescence spectrum identification technology and the relevant properties of petroleum, then summarize, from the relevant data, the main factors that affect fluorescence spectrum recognition in preparation for the experiment. The feasibility of the method is verified through a petroleum fluorescence spectrum recognition experiment with the convolutional neural network. The experimental results show that the relative error of fluorescence spectrum recognition at different petroleum concentrations remains within 9%, and analysis of this relative error shows that it decreases as the concentration increases. These data indicate that when the convolutional neural network algorithm is used to identify the components of a mixed petroleum solution, the qualitative analysis is performed well, while the quantitative analysis of the components still carries a certain relative error.
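The abstract does not include code, but the core idea of applying a convolutional layer to a 1-D fluorescence spectrum can be sketched in plain NumPy. Everything below (the toy Gaussian emission peak, the single edge-detecting kernel) is illustrative and is not taken from the study:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation) of a spectrum with a kernel."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; truncates any leftover tail."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Toy fluorescence spectrum: a Gaussian emission peak on a flat baseline.
wavelengths = np.linspace(300, 500, 128)
spectrum = 0.1 + np.exp(-((wavelengths - 420) ** 2) / (2 * 15 ** 2))

# One convolution -> ReLU -> pooling stage, as in a typical CNN layer.
edge_kernel = np.array([-1.0, 0.0, 1.0])  # responds to rising slopes
features = max_pool(relu(conv1d(spectrum, edge_kernel)))

# The feature map peaks where the emission band rises, just below 420 nm.
peak_bin = int(np.argmax(features))
print(len(features), peak_bin)
```

A real model would learn many such kernels and stack several convolution/pooling stages before a classification head; the point here is only that convolution turns raw spectral intensities into local shape features.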


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Wanyi Zhang ◽  
Qiang Shen ◽  
Stefano Teso ◽  
Bruno Lepri ◽  
Andrea Passerini ◽  
...  

Abstract Various studies have investigated the predictability of different aspects of human behavior, such as mobility patterns, social interactions, and shopping and online behaviors. However, existing research has often been limited to a single behavioral dimension, or to a combination of a few, and has adopted the perspective of an outside observer who is unaware of the motivations behind the specific behaviors or activities of a given individual. The key assumption of this work is that human behavior is deliberated based on an individual's own perception of the situation they are in, and that it should therefore be studied from the same perspective. Taking inspiration from work in ubiquitous and context-aware computing, we investigate the role played by four contextual dimensions (or modalities), namely time, location, activity being carried out, and social ties, in the predictability of individuals' behaviors, using a month of mobile phone sensor readings and self-reported annotations about these contextual modalities collected from more than two hundred study participants. Our analysis shows that any target modality (e.g. location) becomes substantially more predictable when information about the other modalities (time, activity, social ties) is made available. Multi-modality turns out to be in some sense fundamental, as some values (e.g. specific activities like "shopping") are nearly impossible to guess correctly unless the other modalities are known. Subjectivity also has a substantial impact on predictability: a location recognition experiment suggests that subjective location annotations convey more information about activity and social ties than objective information derived from GPS measurements.
We conclude the paper by analyzing how the identified contextual modalities make it possible to compute the diversity of personal behavior, showing that individuals are more easily identified by rarer, rather than frequent, context annotations. These results support the development of innovative computational models of human behaviors enriched by a characterization of the context of a given behavior.
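One standard way to quantify the claim that a target modality becomes more predictable given another is conditional entropy. The sketch below is illustrative only; the toy annotation streams and the information-theoretic framing are assumptions for the example, not the paper's actual pipeline:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sequence of discrete annotations."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(target, context):
    """H(target | context): average entropy of the target within each context value."""
    n = len(target)
    by_ctx = {}
    for t, c in zip(target, context):
        by_ctx.setdefault(c, []).append(t)
    return sum(len(ts) / n * entropy(ts) for ts in by_ctx.values())

# Toy annotation streams in which location is strongly tied to activity.
activity = ["work", "work", "shop", "eat", "work", "shop", "eat", "work"]
location = ["office", "office", "mall", "cafe", "office", "mall", "cafe", "office"]

h_loc = entropy(location)                                  # guessing location alone
h_loc_given_act = conditional_entropy(location, activity)  # with activity known
print(round(h_loc, 3), round(h_loc_given_act, 3))
```

Here location is fully determined by activity, so its conditional entropy drops to zero: an extreme version of the paper's finding that knowing the other modalities makes a target modality more predictable.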


2021 ◽  
Vol 8 (8) ◽  
pp. 201976
Author(s):  
Zhihang Tian ◽  
Dongmin Huang ◽  
Sijin Zhou ◽  
Zhidan Zhao ◽  
Dazhi Jiang

In recent years, more and more researchers have focused on emotion recognition methods based on electroencephalogram (EEG) signals. However, most studies consider only the spatio-temporal characteristics of the EEG and models built on those features, without considering personality factors, let alone the potential correlations between different subjects. Given the particularity of emotions, different individuals may have different subjective responses to the same physical stimulus; emotion recognition methods based on EEG signals should therefore tend toward personalization. This paper models personalized EEG emotion recognition at both the macro and micro levels. At the macro level, we use personality characteristics to classify individuals' personalities from the perspective of 'birds of a feather flock together'. At the micro level, we employ deep learning models to extract the spatio-temporal feature information of the EEG. To evaluate the effectiveness of our method, we conduct an EEG emotion recognition experiment on the ASCERTAIN dataset. Our experimental results demonstrate that the recognition accuracy of the proposed method is 72.4% on valence and 75.9% on arousal, which is 10.2% and 9.1% higher, respectively, than without personalization.
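The macro-level step, grouping subjects by personality before fitting per-group models, can be illustrated with a small clustering sketch. The 2-D personality features, the farthest-pair initialisation, and k = 2 are all assumptions made for this example, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, k=2, iters=10):
    """Minimal k-means with farthest-pair initialisation (written for k = 2)."""
    d = np.linalg.norm(points[:, None] - points[None], axis=2)
    i, j = np.unravel_index(d.argmax(), d.shape)
    centroids = points[[i, j]].copy()
    for _ in range(iters):
        dist = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = points[labels == c].mean(axis=0)
    return centroids, labels

# Toy personality scores (e.g. extraversion, neuroticism) for 40 subjects,
# drawn from two well-separated personality profiles.
group_a = rng.normal([0.2, 0.8], 0.05, size=(20, 2))
group_b = rng.normal([0.8, 0.2], 0.05, size=(20, 2))
subjects = np.vstack([group_a, group_b])

centroids, labels = kmeans(subjects, k=2)
print(labels)
```

The micro-level deep models would then be trained separately on the EEG of each cluster, which is what allows the recognizer to be personalized without needing a model per individual.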


2021 ◽  
Vol 12 (3) ◽  
pp. 1384-1393
Author(s):  
Khodijah Hulliyah et al.

Recognizing emotions through brain waves elicited by facial or vocal expressions is widely studied, but few studies use text stimuli. This study therefore aims to analyze an emotion recognition experiment that stimulates sentiment tones, using EEG. The emotion classification uses a random forest model, compared against two benchmark models, a support vector machine (SVM) and a decision tree. The raw data come from scraping Twitter. The emotion annotation of the dataset was carried out manually over four classes: happiness, sadness, fear, and anger. The annotated dataset was then tested using an electroencephalogram (EEG) device attached to each participant's head to record the brain waves appearing while reading the text. The results showed that the random forest model has the highest accuracy, at 98%, compared with 88% for the decision tree, while the SVM performed poorly at 32%. Furthermore, the agreement on angry emotions between manual annotation and the EEG device across the three models averaged above 90%, because reading with an angry expression is easier to perform.
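The three-way model comparison described above can be reproduced in outline with scikit-learn. The feature construction below is entirely hypothetical (noisy clusters standing in for EEG features of the four emotion classes) and is not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in features for happiness, sadness, fear, and anger: each class is a
# noisy cluster around its own centre in an 8-D "EEG feature" space.
centres = rng.normal(size=(4, 8))
X = np.vstack([c + 0.1 * rng.normal(size=(50, 8)) for c in centres])
y = np.repeat(np.arange(4), 50)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(scores)
```

On these cleanly separable stand-in features all three classifiers do well; the large gaps the study reports (98% vs. 88% vs. 32%) would come from how each model copes with the real EEG feature distribution, not from the comparison harness itself.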


2021 ◽  
Vol 19 (2) ◽  
pp. 147470492199658
Author(s):  
Anne Schienle ◽  
Jonas Potthoff ◽  
Elena Schönthaler ◽  
Carina Schlintl

Studies with adults have found a memory bias for disgust, such that memory for disgusting stimuli is enhanced compared to neutral and frightening stimuli. We investigated whether this bias is more pronounced in females and whether it is already present in children. Moreover, we analyzed whether the visual exploration of disgust stimuli during encoding is associated with memory retrieval. In a first recognition experiment with intentional learning, 50 adults (mean age M = 23 years) and 52 children (M = 11 years) were presented with disgusting, frightening, and neutral pictures. Both children and adults showed better recognition performance for disgusting images than for the other image categories. Males and females did not differ in their memory performance. In a second free-recall experiment with eye-tracking, 50 adults (M = 22 years) viewed images from the categories disgust, fear, and neutral. Disgusting and neutral images were matched for color, complexity, brightness, and contrast. The participants, who were not instructed to remember the stimuli, showed a disgust memory bias as well as shorter fixation durations and longer scan paths for disgusting images compared to neutral images. This "hyperscanning pattern" correlated with the number of correctly recalled disgust images. In conclusion, we found a disgust-related memory bias in both children and adults, regardless of sex and independently of the memorization method used (recognition/free recall; intentional/incidental).
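The two eye-tracking measures reported, fixation duration and scan path length, are straightforward to compute from a list of fixations. The coordinates and durations below are invented for illustration and do not come from the study:

```python
import math

# Hypothetical fixations for one trial: (x, y, duration_ms).
fixations = [(100, 120, 250), (300, 140, 180), (280, 400, 200), (60, 380, 150)]

# Mean fixation duration: shorter values indicate briefer dwelling per location.
mean_duration = sum(d for _, _, d in fixations) / len(fixations)

# Scan path length: total Euclidean distance travelled between fixations;
# longer paths indicate broader visual exploration of the image.
scan_path = sum(
    math.dist(fixations[i][:2], fixations[i + 1][:2])
    for i in range(len(fixations) - 1)
)
print(round(mean_duration, 1), round(scan_path, 1))
```

The "hyperscanning pattern" for disgust images corresponds to this pair of measures moving in opposite directions: mean duration down, scan path up.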


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1887
Author(s):  
Min-Gu Kim ◽  
Sung Bum Pan

Electrocardiogram (ECG) signals are time-series data acquired over time. One problem with these signals is that comparison data of the same size as the registration data must be acquired every time. To resolve this data-size inconsistency, a network model based on an auxiliary-classifier generative adversarial network, capable of generating synthetic ECG signals, is proposed. After constructing comparison data from various combinations of real and generated synthetic ECG signal cycles, a user recognition experiment was performed by applying them to an ensemble network with a parallel structure. A recognition performance of 98.5% was demonstrated when five cycles of real ECG signals were used. Moreover, accuracies of 98.7% and 97% were obtained when, in addition to four cycles of real ECG, the last cycle was, respectively, the first cycle of synthetic ECG or a repeat of the fourth real cycle. When two cycles of synthetic ECG signals were used with three cycles of real ECG signals, 97.2% accuracy was obtained; when the third cycle was instead repeated alongside the three real cycles, the accuracy was 96%, 1.2% lower than with the synthetic ECG. Therefore, even if the sizes of the registration and comparison data are inconsistent, the generated synthetic ECG signals can be applied in a real-life environment, because high recognition performance is demonstrated when they are applied to an ensemble network with a parallel structure.
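The cycle-combination scheme, padding a short run of real cycles with generated ones so that the comparison data always matches the registration size, can be sketched as follows. The random vectors standing in for real ECG cycles and generator output are placeholders, not real signals or the paper's generator:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_comparison(real_cycles, synthetic_cycles, target=5):
    """Pad a short run of real ECG cycles up to `target` cycles with synthetic
    ones, so comparison data always matches the registration data size."""
    cycles = list(real_cycles)
    for i in range(target - len(cycles)):
        cycles.append(synthetic_cycles[i])
    return np.concatenate(cycles)

cycle_len = 200
real = [rng.normal(size=cycle_len) for _ in range(3)]       # only 3 real cycles acquired
synthetic = [rng.normal(size=cycle_len) for _ in range(2)]  # generator output stand-ins

comparison = make_comparison(real, synthetic, target=5)
print(comparison.shape)
```

The resulting fixed-length comparison signal is what would be fed to the parallel ensemble network; the study's alternative baseline simply repeats the last real cycle instead of appending synthetic ones.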


2021 ◽  
Author(s):  
Xin Jiang ◽  
Weixun Qin ◽  
Junyan Wu ◽  
Jiang Xiao ◽  
Yue Zhong ◽  
...  

Abstract Hypoxic-ischemic brain damage (HIBD) is one of the most common critical diseases in neonates, with high mortality and disability rates. Recent research has shown that long non-coding RNAs (lncRNAs) play an important role in the development of HIBD, and acupuncture therapy has been found to be effective in its treatment. However, the mechanism of lncRNAs in acupuncture treatment of HIBD is still unclear. In this study, we investigated this role in detail. In HIBD rat models we demonstrated behavioral performance resembling cognitive deficits in the novel object recognition experiment, and pathological lesions of the prefrontal cortex (PFC) in Nissl staining. Acupuncture treatment at acupoints DU24 and GB13 proved effective in alleviating the behavioral deficits and brain injury. A whole-transcriptome analysis was applied to investigate the transcriptome changes caused by acupuncture in the PFC of HIBD rats. A total of 48 mRNAs and 65 lncRNAs were identified as differentially expressed between the acupuncture group and the model group. According to Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) analyses, several lncRNAs and their target mRNAs were related to the PI3K-Akt, TNF, and NOD-like receptor signaling pathways, among others. The results of our research may provide new perspectives on the mechanism of acupuncture and inform the diagnosis and treatment of HIBD.


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246421
Author(s):  
Sarah Steber ◽  
Sonja Rossi

Being proficient in several foreign languages is an essential part of everyday life. In contrast to childhood, learning a new language can be highly challenging for adults. The present study aims to investigate the neural mechanisms supporting the very initial stages of foreign language learning in adulthood. To this end, subjects underwent an implicit semantic associative training in which they had to learn new pseudoword-picture pairings. Learning success was measured via a recognition experiment presenting learned versus new pseudoword-picture pairings. Neural correlates were assessed by an innovative multi-methodological approach simultaneously applying electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Results indicate that memory-related processes based on familiarity and mechanisms of cognitive control are present during initial vocabulary learning. The findings underline the fascinating plasticity of the adult brain during foreign language learning, even after a short semantic training of only 18 minutes, as well as the importance of comparing evidence from different neuroscientific methods and behavioral data.


2021 ◽  
Author(s):  
Corrina Maguinness ◽  
Sonja Schall ◽  
Katharina von Kriegstein

Perception of human communication signals is often more robust when there is concurrent input from the auditory and visual sensory modalities. For instance, seeing the dynamic articulatory movements of a speaker, in addition to hearing their voice, can help with understanding what is said. This is particularly evident in noisy listening conditions. Even in the absence of concurrent visual input, visual mechanisms continue to be recruited to optimise auditory processing: auditory-only speech and voice-identity recognition is superior for speakers who have previously been learned with their corresponding face, in comparison to an audio-visual control condition, an effect termed the "face-benefit". Whether the face-benefit can assist in maintaining robust perception in noisy listening conditions, in a similar manner to concurrent visual input, is currently unknown. Here, in two behavioural experiments, we explicitly examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their corresponding dynamic face, or a visual control image depicting the speaker's occupation. Following learning, participants listened to auditory-only sentences spoken by the same speakers and were asked to recognise the content of the sentences (i.e. speech recognition, Experiment 1) or the identity of the speaker (i.e. voice-identity recognition, Experiment 2) at different levels of increasing auditory noise (SNR +4 dB to -8 dB). For both speech and voice-identity recognition, we observed that for participants who showed a face-benefit, the benefit increased with the degree of noise in the auditory signal (Experiments 1 and 2). Taken together, these results support an audio-visual model of human auditory communication and suggest that the brain has developed a flexible system to deal with auditory uncertainty: learned visual mechanisms are recruited to enhance the recognition of the auditory signal.
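The noise manipulation (SNR from +4 dB down to -8 dB) corresponds to scaling additive noise relative to the signal power. A minimal sketch follows, with a pure tone standing in for speech (an assumption for the example, not the study's stimuli):

```python
import numpy as np

rng = np.random.default_rng(7)

def add_noise_at_snr(signal, snr_db):
    """Add white Gaussian noise so the mixture has the requested SNR in dB."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

# A pure tone standing in for a spoken sentence (16 kHz, 1 s).
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 220 * t)

measured = {}
for snr_db in (4, 0, -4, -8):
    noisy = add_noise_at_snr(clean, snr_db)
    # Recover the empirical SNR of the mixture to check the construction.
    measured[snr_db] = 10 * np.log10(
        np.mean(clean ** 2) / np.mean((noisy - clean) ** 2)
    )
print({k: round(v, 2) for k, v in measured.items()})
```

At -8 dB the noise power is more than six times the signal power, which is why any face-derived prior the listener can recruit becomes increasingly valuable as the SNR drops.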


2021 ◽  
pp. 1-24
Author(s):  
Sara King ◽  
Yi Ren ◽  
Kaori Idemaru ◽  
Cindi Sturtzsreetharan

Abstract Previous work on the Osaka dialect (OD) collectively suggests that this western regional variant of Japanese is associated with informality, masculinity, and affective fatherhood—social meanings that can be recruited in the construction of audio-visual media personas. This study examines the use of OD by one protagonist in the film Soshite chichi ni naru/Like father, like son, as well as the social meanings that listeners attribute to this variety of Japanese. Specifically, we ask two questions: (i) to what extent is the production of OD in the film recognizable to native speakers of Japanese, and (ii) what qualities do Japanese language users attribute to OD? A dialect recognition experiment found low recognizability of OD but high recognizability of a general ‘nonstandard Japanese’ language variety. Qualitative data revealed that Japanese language users perceived OD to index various characteristics including that of a masculine, affective father. (Perception, dialect, fatherhood, Osaka dialect, indexicality)

