Semi-Supervised Learning to Perceive Children's Affective States in a Tablet Tutor

2020 ◽  
Vol 34 (09) ◽  
pp. 13350-13357
Author(s):  
Mansi Agarwal ◽  
Jack Mostow

Like good human tutors, intelligent tutoring systems should detect and respond to students' affective states. However, accuracy in detecting affective states automatically has been limited by the time and expense of manually labeling training data for supervised learning. To combat this limitation, we use semi-supervised learning to train an affective state detector on a sparsely labeled, culturally novel, authentic data set in the form of screen capture videos from a Swahili literacy and numeracy tablet tutor in Tanzania that shows the face of the child using it. We achieved 88% leave-1-child-out cross-validated accuracy in distinguishing pleasant, unpleasant, and neutral affective states, compared to only 61% for the best supervised learning method we tested. This work contributes toward using automated affect detection both off-line to improve the design of intelligent tutors, and at runtime to respond to student affect based on input from a user-facing tablet camera or webcam.
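
As a concrete illustration of the semi-supervised setup described above, here is a minimal sketch using scikit-learn's SelfTrainingClassifier with leave-one-child-out evaluation. The feature matrix, label coding, and classifier choice are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: semi-supervised affect classification, assuming per-frame face
# features have already been extracted upstream from the screen-capture video.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical data: one feature vector per frame; labels in {0: neutral,
# 1: pleasant, 2: unpleasant}, with -1 marking the many unlabeled frames.
X = np.random.rand(1000, 64)
y = np.full(1000, -1)
y[:100] = np.random.randint(0, 3, 100)     # only a sparse subset is labeled
child_id = np.random.randint(0, 20, 1000)  # grouping for leave-1-child-out

base = SVC(probability=True)               # any probabilistic classifier works
model = SelfTrainingClassifier(base, threshold=0.8)

# Leave-one-child-out cross-validation, as in the paper's evaluation.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=child_id):
    model.fit(X[train_idx], y[train_idx])  # learns from labeled + unlabeled
    labeled = y[test_idx] != -1            # score only on labeled test frames
    if labeled.any():
        acc = model.score(X[test_idx][labeled], y[test_idx][labeled])
```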

2014 ◽  
Vol 24 (38) ◽  
pp. 97
Author(s):  
Antonio Rico-Sulayes

<p align="justify">This article proposes the architecture for a system that uses previously learned weights to sort query results from unstructured data bases when building specialized dictionaries. A common resource in the construction of dictionaries, unstructured data bases have been especially useful in providing information about lexical items frequencies and examples in use. However, when building specialized dictionaries, whose selection of lexical items does not rely on frequency, the use of these data bases gets restricted to a simple provider of examples. Even in this task, the information unstructured data bases provide may not be very useful when looking for specialized uses of lexical items with various meanings and very long lists of results. In the face of this problem, long lists of hits can be rescored based on a supervised learning model that relies on previously helpful results. The allocation of a vast set of high quality training data for this rescoring system is reported here. Finally, the architecture of sucha system,an unprecedented tool in specialized lexicography, is proposed.</p>


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pengcheng Li ◽  
Qikai Liu ◽  
Qikai Cheng ◽  
Wei Lu

Purpose: This paper aims to identify data set entities in scientific literature. To address poor recognition caused by a lack of training corpora in existing studies, a distant supervised learning-based approach is proposed to identify data set entities automatically from large-scale scientific literature in an open domain.

Design/methodology/approach: Firstly, the authors use a dictionary combined with a bootstrapping strategy to create a labelled corpus for supervised learning. Secondly, a bidirectional encoder representations from transformers (BERT)-based neural model is applied to identify data set entities in the scientific literature automatically. Finally, two data augmentation techniques, entity replacement and entity masking, are introduced to enhance the model's generalisability and improve the recognition of data set entities.

Findings: In the absence of training data, the proposed method can effectively identify data set entities in large-scale scientific papers. The BERT-based vectorised representation and data augmentation techniques enable significant improvements in the generality and robustness of named entity recognition models, especially in long-tailed data set entity recognition.

Originality/value: This paper provides a practical research method for automatically recognising data set entities in scientific literature. To the best of the authors' knowledge, this is the first attempt to apply distant learning to the study of data set entity recognition. The authors introduce a robust vectorised representation and two data augmentation strategies (entity replacement and entity masking) to address a problem inherent in distant supervised learning methods, which existing research has mostly ignored. The experimental results demonstrate that the approach effectively improves the recognition of data set entities, especially long-tailed data set entities.
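
To make the two augmentation strategies concrete, here is a small sketch operating on BIO-tagged token sequences; the tag names, mask token, and data-set-name inventory are assumptions for illustration, not the paper's exact configuration.

```python
import random

DATASET_NAMES = [["ImageNet"], ["SQuAD"], ["MS", "COCO"]]  # assumed inventory
MASK = "[MASK]"  # BERT-style mask token

def spans(labels, prefix="DATASET"):
    """Yield (start, end) indices of contiguous B-/I- entity spans."""
    start = None
    for i, lab in enumerate(labels + ["O"]):       # sentinel closes last span
        if lab == f"B-{prefix}":
            if start is not None:
                yield start, i
            start = i
        elif lab != f"I-{prefix}" and start is not None:
            yield start, i
            start = None

def entity_replacement(tokens, labels):
    """Swap each data-set mention for another name from the inventory."""
    out_t, out_l, prev = [], [], 0
    for s, e in spans(labels):
        new = random.choice(DATASET_NAMES)
        out_t += tokens[prev:s] + new
        out_l += labels[prev:s] + ["B-DATASET"] + ["I-DATASET"] * (len(new) - 1)
        prev = e
    return out_t + tokens[prev:], out_l + labels[prev:]

def entity_masking(tokens, labels):
    """Replace data-set mention tokens with [MASK], keeping the labels."""
    return [MASK if l != "O" else t for t, l in zip(tokens, labels)], labels
```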


2021 ◽  
Vol 17 (12) ◽  
pp. 155014772110599
Author(s):  
Zhong Li ◽  
Huimin Zhuang

Nowadays, in the industrial Internet of things, address resolution protocol attacks are still rampant. Recently, the idea of applying the software-defined networking paradigm to the industrial Internet of things has been proposed by many scholars, since this paradigm has the advantages of flexible deployment of intelligent algorithms and global coordination capabilities. These advantages prompt us to propose a multi-factor integration-based semi-supervised learning address resolution protocol detection method deployed in software-defined networking, called MIS, to specifically solve the problems of limited labeled training data and incomplete feature extraction in traditional address resolution protocol detection methods. In the MIS method, we design a multi-factor integration-based feature extraction method and propose a semi-supervised learning framework with differential priority sampling. MIS considers address resolution protocol attack features from different aspects to help the model make correct judgments. Meanwhile, differential priority sampling enables the base learner in self-training to learn efficiently from unlabeled samples with differences. We conduct experiments on a real data set collected from a deepwater port and on a simulated data set. The experiments show that MIS achieves good performance in detecting address resolution protocol attacks, with an F1-measure, accuracy, and area under the curve of 97.28%, 99.41%, and 98.36% on average. Meanwhile, compared with fully supervised learning and other popular address resolution protocol detection methods, MIS also shows the best performance.
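
A minimal self-training loop with a confidence-margin priority, as one plausible reading of "differential priority sampling": unlabeled flows are pseudo-labeled in order of how decisively the current model separates the top two classes. Feature extraction from ARP traffic and the base learner are assumptions, not the paper's design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, rounds=5, batch=100):
    model = RandomForestClassifier(n_estimators=100)
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        top2 = np.sort(proba, axis=1)[:, -2:]
        margin = top2[:, 1] - top2[:, 0]        # confidence difference
        take = np.argsort(-margin)[:batch]      # highest-margin samples first
        pseudo = model.classes_[proba[take].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[take]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = np.delete(X_unlab, take, axis=0)
    return model
```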


2019 ◽  
Vol 8 (4) ◽  
pp. 12842-12845

Automating the analysis of facial expressions of individuals is one of the challenging tasks in opinion mining. This work proposes a technique for identifying the face of an individual, and the emotions if present, from a live camera. Expression detection is a sub-area of computer vision that locates a person in a digital image and identifies facial expressions, which are key factors in nonverbal communication. The complexity lies mainly in two cases: 1) more than one emotion may coexist on a face, and 2) different individuals do not express the same emotion in exactly the same way. Our aim was to automate the process by identifying the expressions of people in a live video. The system uses the OpenCV library, whose face recognizer module detects faces and trains the model. It was able to identify seven different expressions with 75-85% accuracy: happiness, sadness, disgust, fear, anger, surprise, and neutral. An image frame is captured from the video, the face in it is located, and the face is then tested against the training data to predict the emotion and update the result. This process continues as long as the video input exists. In addition, the training data set should be constructed so that prediction is independent of the age, gender, skin color, and orientation of the human face in the video, as well as the illumination around the subject of reference.
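
A sketch of the per-frame loop the abstract describes, using OpenCV's Haar cascade face detector; `predict_expression` stands in for whatever seven-class model was trained and is a placeholder, not an OpenCV API.

```python
import cv2

LABELS = ["happiness", "sadness", "disgust", "fear", "anger", "surprise", "neutral"]
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # live camera input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:                             # loop runs until the video ends
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        label = LABELS[predict_expression(face)]   # hypothetical classifier
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("expression", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```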


2017 ◽  
Vol 29 (5) ◽  
pp. 864-876 ◽  
Author(s):  
Masahiko Mikawa ◽  

We are developing a robotic system for asteroid surface exploration. The system consists of multiple small rovers that communicate with each other over a wireless network. Since the rovers form a wireless mesh sensor network on the asteroid, it is possible to explore a large area effectively. The rovers will be equipped with a hopping mechanism for transportation, which is suitable for exploration in a micro-gravity environment like a small asteroid's surface. However, it is difficult to control the rover's attitude during landing. Therefore, a cube-shaped rover was designed. As every face has two antennas, the rover has a total of twelve antennas. Furthermore, as the body shape and the antenna arrangement are symmetric irrespective of which face is on top, a reliable communication state among the rovers can be established by selecting the proper antennas on the top face. It is therefore important to estimate which face of the rover is on top. This paper presents an attitude estimation method based on the received signal strength indicators (RSSIs) obtained when the twelve antennas communicate with each other. Since RSSI values change depending on the attitude of the rover and the surrounding environment, a significantly large number of RSSIs were collected as a training data set in different kinds of environments similar to an asteroid; a classifier for estimating the rover attitude was then trained from this data set. Experimental results establish the validity and effectiveness of the proposed exploration system and attitude estimation method.
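
A minimal sketch of the attitude classifier: each sample is a vector of RSSI readings from the rover's twelve antennas, and the label is which of the six faces is on top. The feature layout and classifier are assumptions; the paper collects real RSSIs during inter-rover communication.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training set: one RSSI value per antenna (12 features),
# labels 0-5 indicating which cube face is currently on top.
X = np.random.randn(600, 12)          # stand-in for measured RSSI vectors
y = np.random.randint(0, 6, 600)      # stand-in for the true top face

clf = RandomForestClassifier(n_estimators=200)
scores = cross_val_score(clf, X, y, cv=5)   # estimate attitude accuracy
clf.fit(X, y)
top_face = clf.predict(X[:1])               # predict which face is up
```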


Author(s):  
Aijun An

Generally speaking, classification is the action of assigning an object to a category according to the characteristics of the object. In data mining, classification refers to the task of analyzing a set of pre-classified data objects to learn a model (or a function) that can be used to classify an unseen data object into one of several predefined classes. A data object, referred to as an example, is described by a set of attributes or variables. One of the attributes describes the class that an example belongs to and is thus called the class attribute or class variable. Other attributes are often called independent or predictor attributes (or variables). The set of examples used to learn the classification model is called the training data set. Tasks related to classification include regression, which builds a model from training data to predict numerical values, and clustering, which groups examples to form categories. Classification belongs to the category of supervised learning, as distinguished from unsupervised learning: in supervised learning, the training data consists of pairs of input data (typically vectors) and desired outputs, while in unsupervised learning there is no a priori output.
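
A minimal concrete instance of these definitions, using scikit-learn's bundled iris data for brevity: examples are feature vectors with one class attribute, a model is learned from the training set, and it is then applied to an unseen example.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)     # predictor attributes, class attribute
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

model = DecisionTreeClassifier().fit(X_train, y_train)  # learn from training set
print(model.predict(X_test[:1]))      # classify an unseen example
print(model.score(X_test, y_test))    # fraction of test examples correct
```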


Author(s):  
Ruoqi Wei ◽  
Ausif Mahmood

Despite the importance of few-shot learning, the lack of labeled training data in the real world makes it extremely challenging for existing machine learning methods, because such a limited data set does not represent the data variance well. In this research, we suggest employing a generative approach using variational autoencoders (VAEs), which can be used specifically to optimize few-shot learning tasks by generating new samples with more intra-class variation. The purpose of our research is to increase the size of the training data set using various methods so as to improve the accuracy and robustness of few-shot face recognition. Specifically, we employ the VAE generator to increase the size of the training data set, including both the base and the novel sets, while utilizing transfer learning as the backend. Based on extensive experimental research, we analyze various data augmentation methods to observe how each method affects the accuracy of face recognition. We conclude that the face generation method we propose can effectively improve the recognition accuracy to 96.47% using both the base and the novel sets.
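
A compact VAE sketch in PyTorch to illustrate the augmentation idea: fit a latent model on the few available face images, then decode perturbed latent codes to synthesize extra intra-class samples. The architecture sizes are illustrative assumptions; the paper's generator and transfer-learning backend differ.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim=64 * 64, z=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z), nn.Linear(256, z)
        self.dec = nn.Sequential(nn.Linear(z, 256), nn.ReLU(),
                                 nn.Linear(256, dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        zs = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(zs), mu, logvar

def loss_fn(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

def augment(model, x, n=10, noise=0.5):
    # After training on a few-shot class, sample perturbed latents to grow it.
    with torch.no_grad():
        mu = model.mu(model.enc(x))
        zs = mu.repeat(n, 1) + noise * torch.randn(n, mu.shape[1])
        return model.dec(zs)           # n new synthetic face images
```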


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mohammed Al-Mukhtar ◽  
Ameer Hussein Morad ◽  
Mustafa Albadri ◽  
MD Samiul Islam

Vision loss happens due to diabetic retinopathy (DR) in severe stages. Thus, an automatic detection method applied to diagnose DR in an earlier phase may help medical doctors make better decisions. DR is considered one of the main risks leading to blindness. Computer-Aided Diagnosis systems play an essential role in detecting features in fundus images. Fundus images may include blood vessels, exudates, micro-aneurysms, hemorrhages, and neovascularization. In this paper, our model combines automatic detection for diabetic retinopathy classification with localization methods based on weakly supervised learning. The model has four stages: in stage one, various preprocessing techniques are applied to smooth the data set. In stage two, the network segments the optic disk to eliminate false exudate predictions, because exudates have the same pixel color as the optic disk. In stage three, the network is fed with training data to classify each label. Finally, the layers of the convolutional neural network are re-edited and used to localize the impact of DR on the patient's eye. The framework combines two essential concepts: the classification problem depends on the supervised learning method, while the localization problem is handled by the weakly supervised method. An additional layer known as the weakly supervised sensitive heat map (WSSH) was added to detect the ROI of the lesion at a test accuracy of 98.65%, compared with 0.954 for the Class Activation Map, which also involves weakly supervised technology. The main purpose is to learn a representation that captures the central localization of discriminative features in a retina image. The CNN-WSSH model is able to highlight decisive features in a single forward pass, yielding the best detection of lesions.
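
The WSSH layer itself is not spelled out here, but the class activation map it is compared against is standard and illustrates the weakly supervised localization idea: weight the last convolutional feature maps by the classifier weights of the predicted class to localize lesions from image-level labels only. A sketch follows; the backbone and layer names are assumptions, not the paper's exact network.

```python
import torch
import torch.nn.functional as F
from torchvision import models

net = models.resnet18(weights="DEFAULT").eval()
feats = {}
net.layer4.register_forward_hook(lambda m, i, o: feats.update(maps=o))

def cam(image):                        # image: (1, 3, H, W) fundus tensor
    with torch.no_grad():
        logits = net(image)
        cls = logits.argmax(1).item()
        w = net.fc.weight[cls]                       # (C,) class weights
        heat = (w[:, None, None] * feats["maps"][0]).sum(0)
        heat = F.relu(heat)                          # keep positive evidence
        heat = heat / (heat.max() + 1e-8)            # normalize to [0, 1]
        return F.interpolate(heat[None, None], size=image.shape[2:],
                             mode="bilinear")[0, 0]  # upsample to image size
```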


2021 ◽  
Author(s):  
Mohammed Almukhtar ◽  
Ameer Morad ◽  
Mustafa Albadri ◽  
MD Islam

Vision loss happens due to diabetic retinopathy (DR) in severe stages. Thus, an automatic detection method applied to diagnose DR in an earlier phase may help medical doctors make better decisions. DR is considered one of the main risks leading to blindness. Computer-Aided Diagnosis (CAD) systems play an essential role in detecting features in fundus images. Fundus images may include blood vessel areas, exudates, micro-aneurysms, hemorrhages, and neovascularization. In this paper, our model combines automatic detection for diabetic retinopathy classification with localization methods based on weakly supervised learning. The model has four stages: in stage one, various preprocessing techniques are applied to smooth the data set. In stage two, the network segments the optic disk to eliminate false exudate predictions, because exudates have the same pixel color as the optic disk. In stage three, the network is fed with training data to classify each class label. Finally, the layers of the convolutional neural network are re-edited and used to localize the impact of DR on the patient's eye. The framework combines two essential concepts: the classification problem depends on the supervised learning method, while the localization problem is handled by the weakly supervised method. An additional layer known as the weakly supervised sensitive heat map (WSSH) was added to detect the ROI of the lesion at a test accuracy of 98.65%.

