A Socratic epistemology for verbal emotional intelligence

2016 ◽  
Vol 2 ◽  
pp. e40
Author(s):  
Abe Kazemzadeh ◽  
James Gibson ◽  
Panayiotis Georgiou ◽  
Sungbok Lee ◽  
Shrikanth Narayanan

We describe and experimentally validate a question-asking framework for machine-learned linguistic knowledge about human emotions. Using the Socratic method as a theoretical inspiration, we develop an experimental method and computational model for computers to learn subjective information about emotions by playing emotion twenty questions (EMO20Q), a game of twenty questions limited to words denoting emotions. Using human–human EMO20Q data we bootstrap a sequential Bayesian model that drives a generalized pushdown automaton-based dialog agent that further learns from 300 human–computer dialogs collected on Amazon Mechanical Turk. The human–human EMO20Q dialogs show the capability of humans to use a large, rich, subjective vocabulary of emotion words. Training on successive batches of human–computer EMO20Q dialogs shows that the automated agent is able to learn from subsequent human–computer interactions. Our results show that the training procedure enables the agent to learn a large set of emotion words. The fully trained agent successfully completes EMO20Q at 67% of human performance and 30% better than the bootstrapped agent. Even when the agent fails to guess the human opponent’s emotion word in the EMO20Q game, the agent’s behavior of searching for knowledge makes it appear human-like, which enables the agent to maintain user engagement and learn new, out-of-vocabulary words. These results lead us to conclude that the question-asking methodology and its implementation as a sequential Bayes pushdown automaton are a successful model for the cognitive abilities involved in learning, retrieving, and using emotion words by an automated agent in a dialog setting.
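The sequential Bayesian model at the core of the agent can be sketched as a running posterior over candidate emotion words that is updated after each yes/no answer. The vocabulary, questions, and likelihood values below are invented for illustration and are not taken from the EMO20Q data:

```python
# A minimal sketch, assuming a toy three-word vocabulary: the sequential
# Bayesian model keeps a posterior over candidate emotion words and
# multiplies in the likelihood of each yes/no answer. All numbers here
# are illustrative, not estimated from the EMO20Q dialogs.

# P(answer is "yes" | question, hidden emotion word)
likelihood = {
    "is it positive?":    {"happy": 0.95, "sad": 0.05, "angry": 0.10},
    "is it high energy?": {"happy": 0.70, "sad": 0.10, "angry": 0.90},
}

def update(posterior, question, answer_yes):
    """One sequential-Bayes step: prior times answer likelihood, renormalized."""
    new = {}
    for word, p in posterior.items():
        p_yes = likelihood[question][word]
        new[word] = p * (p_yes if answer_yes else 1.0 - p_yes)
    total = sum(new.values())
    return {w: p / total for w, p in new.items()}

# Start from a uniform prior, then observe two answers.
posterior = {w: 1 / 3 for w in ("happy", "sad", "angry")}
posterior = update(posterior, "is it positive?", answer_yes=False)
posterior = update(posterior, "is it high energy?", answer_yes=True)
best_guess = max(posterior, key=posterior.get)   # "angry"
```

After a "no" to positivity and a "yes" to high energy, the posterior concentrates on "angry"; the agent asks the question expected to discriminate best among the remaining candidates and guesses once one word dominates.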


2021 ◽  
Vol 4 (3) ◽  
pp. 53
Author(s):  
Yi Peng Toh ◽  
Emilie Dion ◽  
Antónia Monteiro

Butterflies possess impressive cognitive abilities, and investigations into the neural mechanisms underlying these abilities are increasingly being conducted. Exploring butterfly neurobiology may require the isolation of larval, pupal, and/or adult brains for further molecular and histological experiments. This procedure has been largely described in the fruit fly, but a detailed description of butterfly brain dissections is still lacking. Here, we provide a detailed written and video protocol for the removal of Bicyclus anynana adult, pupal, and larval brains. This species is gradually becoming a popular model because it uses a large set of sensory modalities, displays plastic and hormonally controlled courtship behaviour, and learns visual mate preferences and olfactory preferences that can be passed on to its offspring. The extracted brain can be used for downstream analyses such as immunostaining and DNA or RNA extraction, and the procedure can easily be adapted to other lepidopteran species and life stages.


Author(s):  
GOBIR MARIAM TITILOPE

Sound perception is pivotal to language acquisition and usage, and it is the bedrock for the display of linguistic knowledge in every individual. However, misperception of sounds and sound-production anomalies can be language-based or cognitively oriented. The aim of this study was to assess the utterances of selected three-year-old pupils from a clinical perspective. The study adopts a survey research approach. Using the purposive sampling technique and the participatory observation method, twenty utterances of kindergarten pupils were recorded, transcribed and analysed both perceptually and acoustically. The study adapted a blend of the clinical phonological and clinical psycholinguistic approaches for the analysis of the selected pupils' utterances. The assessment showed that even though speech disturbances characterise the language of the pupils, gender difference plays a role in cognitive and linguistic development. The female pupils were found to be less deficient than their male counterparts, as their word realisations are more appropriate and correspond more closely with the superstrate transcriptions. Also, in spite of the differences in the cognitive abilities of the pupils, they unconsciously adopt simplification procedures to cover up their speech deficiencies. It is recommended that teachers facilitate learning by both genders of learners by varying their teaching methods and selecting instructional materials carefully.


2019 ◽  
Vol 9 (20) ◽  
pp. 4364 ◽  
Author(s):  
Frédéric Bousefsaf ◽  
Alain Pruski ◽  
Choubeila Maaoui

Remote pulse rate measurement from facial video has gained particular attention over the last few years. Research exhibits significant advancements and demonstrates that common video cameras are reliable devices that can be employed to measure a large set of biomedical parameters without any contact with the subject. A new framework for measuring and mapping pulse rate from video is presented in this pilot study. The method, which relies on convolutional 3D networks, is fully automatic and does not require any special image preprocessing. In addition, the network ensures concurrent mapping by producing a prediction for each local group of pixels. A particular training procedure that employs only synthetic data is proposed. Preliminary results demonstrate that this convolutional 3D network can effectively extract pulse rate from video without the need for any processing of frames. The trained model was compared with other state-of-the-art methods on public data. Results exhibit significant agreement between estimated and ground-truth measurements: the root mean square error computed from pulse rate values assessed with the convolutional 3D network is 8.64 bpm, compared with over 10 bpm for the other state-of-the-art methods. Improving robustness to natural motion and increasing performance are the two main avenues that will be considered in future work.
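The agreement metric quoted above is a root mean square error over paired pulse-rate readings, in beats per minute. A minimal worked sketch with made-up values (the real evaluation pairs per-window video estimates with contact-sensor ground truth):

```python
import math

# Hypothetical paired readings in beats per minute; the real evaluation
# pairs estimates from the convolutional 3D network with sensor ground truth.
estimated    = [72.0, 75.5, 80.2, 68.9]
ground_truth = [70.0, 76.0, 79.0, 71.0]

# RMSE: square the per-window errors, average, take the square root.
rmse = math.sqrt(
    sum((e - g) ** 2 for e, g in zip(estimated, ground_truth)) / len(estimated)
)
```

Because errors are squared before averaging, RMSE weights occasional large misestimates more heavily than mean absolute error would, which is why it is a common agreement measure for physiological signals.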


2020 ◽  
Vol 71 (1) ◽  
pp. 193-219 ◽  
Author(s):  
Mark T. Wallace ◽  
Tiffany G. Woynaroski ◽  
Ryan A. Stevenson

During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Faizan Ali ◽  
Abraham Terrah ◽  
Chengzhong Wu ◽  
Laiba Ali ◽  
Hui Wu

Purpose: This study aims to test the effect of system quality, information quality and service quality on user engagement, and the effect of engagement on smartphone travel apps' satisfaction, love and behavioral intentions. Design/methodology/approach: Using the self-selection sampling technique, data was collected from 417 respondents recruited via Amazon Mechanical Turk and subjected to partial least squares based structural equation modeling. Findings: Results indicate that system quality, information quality and service quality have a significantly positive impact on user engagement with smartphone travel apps. Moreover, user engagement has a positive and significant impact on smartphone app satisfaction, smartphone app love and behavioral intentions. Originality/value: This is the first study to integrate DeLone and McLean's (2003) updated information systems success model and the stimulus-organism-response model to propose a holistic model of users' engagement with smartphone travel apps.


2019 ◽  
Vol 10 (3) ◽  
pp. 37-52
Author(s):  
Marko Kesti ◽  
Aino-Inkeri Ylitalo ◽  
Hanna Vakkala

Digital disruption and continuous productivity improvement require more from people management, thus raising the bar for leadership competencies. International studies indicate that leadership competence gaps are large and that traditional leadership training methods do not seem to solve this problem. This article's findings support that view. The authors unpack the complexity behind organizational productivity development and present a game-theoretical architecture that simulates the effects of management behavior on human performance. These methods enable practice-based learning that shapes leaders' behavior so that it creates long-term success amid continuous change. The authors present a gamified leadership training procedure and discuss the practical learning experiences from a management simulation game. The study reveals challenges in interactive leadership skills, suggesting, it is argued, underlying problems in the leadership mind-set. Therefore, more sophisticated learning methods and tools should be used.


Author(s):  
Mohammad Rostami ◽  
Soheil Kolouri ◽  
Eric Eaton ◽  
Kyungnam Kim

The reemergence of deep Convolutional Neural Networks (CNNs) has led to high-performance supervised learning algorithms for classification and detection problems in the Electro-Optical (EO) domain. This success is possible because huge labeled datasets can now be generated using modern crowdsourcing platforms such as Amazon Mechanical Turk, which recruit ordinary people to label data. Unlike the EO domain, labeling Synthetic Aperture Radar (SAR) data is considerably more challenging, and for various reasons crowdsourcing platforms are not feasible for labeling SAR data. As a result, training deep networks using supervised learning is more challenging in the SAR domain. In this paper, we present a new framework to train a deep neural network for classifying SAR images that eliminates the need for a huge labeled dataset. Our idea is based on transferring knowledge from a related EO domain problem, where labeled data is easy to obtain. We transfer knowledge from the EO domain by learning a shared, domain-invariant embedding space that is also discriminative for classification. To this end, we train two deep encoders that are coupled through their last layer to map data points from the EO and the SAR domains to the shared embedding space such that the distance between the distributions of the two domains is minimized in the latent embedding space. We use the Sliced Wasserstein Distance (SWD) to measure and minimize the distance between these two distributions, and we use a limited number of labeled SAR data points to match the distributions class-conditionally. As a result of this training procedure, a classifier trained from the embedding space to the label space using mostly the EO data generalizes well on the SAR domain.
We provide theoretical analysis to demonstrate why our approach is effective and validate our algorithm on the problem of ship classification in the SAR domain by comparing against several competing learning approaches.
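The Sliced Wasserstein Distance used to align the two embedding distributions can be sketched in a few lines: project both sample sets onto random unit directions and apply the closed-form one-dimensional solution, which matches sorted projections. The toy "EO" and "SAR" embeddings below are synthetic stand-ins, not the paper's learned features:

```python
import math
import random

def sliced_wasserstein(x, y, n_projections=50, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    equal-sized samples x and y (lists of d-dimensional points): project
    onto random unit directions, then use the 1-D closed form, which
    pairs up sorted projections."""
    rng = random.Random(seed)
    d = len(x[0])
    total = 0.0
    for _ in range(n_projections):
        theta = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(t * t for t in theta))
        theta = [t / norm for t in theta]            # random unit direction
        px = sorted(sum(xi * t for xi, t in zip(row, theta)) for row in x)
        py = sorted(sum(yi * t for yi, t in zip(row, theta)) for row in y)
        total += sum((a - b) ** 2 for a, b in zip(px, py)) / len(px)
    return math.sqrt(total / n_projections)

# Toy stand-ins for EO and SAR embeddings: identical clouds have distance
# zero; a systematically shifted cloud is far, which is the mismatch the
# training objective drives the coupled encoders to remove.
rng = random.Random(1)
eo_embed  = [[rng.gauss(0.0, 1.0) for _ in range(8)] for _ in range(256)]
sar_embed = [[v + 3.0 for v in row] for row in eo_embed]
d_same  = sliced_wasserstein(eo_embed, eo_embed)
d_shift = sliced_wasserstein(eo_embed, sar_embed)
```

The appeal of SWD as a training loss is that each one-dimensional projection is solved exactly by sorting, avoiding the expensive optimal-transport problem in the full embedding dimension.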


Author(s):  
Luca de Alfaro ◽  
Vassilis Polychronopoulos ◽  
Neoklis Polyzotis

We focus on the problem of obtaining top-k lists of items from larger itemsets, using human workers to perform comparisons among items. An example application is short-listing a large set of college applications using advanced students as workers. We describe novel efficient techniques and explore their tolerance to adversarial behavior and the tradeoffs among different measures of performance (latency, expense and quality of results). We empirically evaluate the proposed techniques against prior art using simulations as well as real crowds on Amazon Mechanical Turk. A randomized variant of the proposed algorithms achieves significant budget savings, especially for very large itemsets and large top-k lists, with negligible risk of lowering the quality of the output.
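To make the problem setting concrete, here is a deliberately naive top-k selection driven by a pairwise comparison oracle standing in for a crowd worker. This is not the paper's algorithm, only a baseline that shows the oracle interface and how a strategy's comparison budget is accounted for:

```python
# Naive baseline for crowd-powered top-k selection: each call to the
# oracle below models one paid worker judgment, so the counter is the
# "expense" a smarter strategy would try to reduce.

comparisons = 0

def worker_prefers(a, b):
    """Stand-in for a crowd worker's judgment: True if item a beats item b.
    Here items are plain numbers; in the real task only a human can compare."""
    global comparisons
    comparisons += 1
    return a > b

def top_k(items, k):
    """Extract the maximum k times by linear scan: O(k * n) comparisons."""
    remaining = list(items)
    result = []
    for _ in range(k):
        best = remaining[0]
        for item in remaining[1:]:
            if worker_prefers(item, best):
                best = item
        remaining.remove(best)
        result.append(best)
    return result

shortlist = top_k([3, 9, 4, 7, 1, 8], k=3)   # [9, 8, 7], 12 judgments
```

Against this baseline, techniques like the paper's randomized variant aim to spend far fewer worker judgments on large itemsets while keeping the output quality, and must additionally tolerate workers who answer inconsistently or adversarially.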

