Recognizing Sarcasm in Twitter: A Comparison of Neural Network and Human Performance

2014 ◽  
Author(s):  
David Kovaz ◽  
Roger Kreuz


Author(s):  
Amira Ahmad Al-Sharkawy ◽  
Gehan A. Bahgat ◽  
Elsayed E. Hemayed ◽  
Samia Abdel-Razik Mashali

Object classification is essential in many applications nowadays. Humans can easily classify objects even in unconstrained environments, whereas classical classification techniques fell far short of human performance. Researchers therefore tried to mimic the human visual system, eventually arriving at deep neural networks. This chapter reviews and analyses the use of deep convolutional neural networks for object classification under constrained and unconstrained environments. It gives a brief review of classical object classification techniques and of the development of bio-inspired computational models, from neuroscience to the creation of deep neural networks. It then reviews the issues raised by constrained environments: hardware computing resources and memory, object appearance and background, and training and processing time. Finally, the datasets used to test performance are analysed according to the environmental conditions of their images, and dataset bias is discussed.


Author(s):  
Peter S Gural

Abstract
A class of advanced machine learning techniques, namely deep learning, has been applied to automating the confirmation/classification of potential meteor tracks in video imagery. Deep learning is shown to perform remarkably well, even surpassing human performance, and will likely supplant the need for human visual inspection and review of collected meteor imagery. When applied to time series measurements of meteor track centroid positions and integrated intensities obtained from each video frame, a recurrent neural network (RNN) has achieved 98.1 per cent recall, defined as the fraction of true meteors properly classified as meteors. The RNN allowed only 2.1 per cent leakage, defined herein as the fraction of false positives incorrectly identified as meteors. The goal is to maximize recall to avoid missed orbit estimations, while also minimizing false alarms leaking through to the next processing stage of multi-site trajectory and orbit estimation. When two-dimensional spatial imagery is available or the temporal image sequence can be reconstructed, these results climb to 99.94 per cent recall and only 0.4 per cent leakage when employing a convolutional neural network (CNN). This has been further generalized from a baseline of interleaved analog video to modern progressive scan digital imagery with equivalent results. The trained CNN, nicknamed MeteorNet, will be used for post-detection automated screening of potential meteor tracks and explored in the future as a potential upstream meteor detector.
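The recall and leakage metrics defined in the abstract can be illustrated with a short sketch; the counts below are made-up examples for illustration, not MeteorNet's actual confusion-matrix figures.

```python
# Illustration of the recall and leakage metrics as defined in the abstract.
# The example counts are hypothetical, not MeteorNet results.

def recall(true_pos: int, false_neg: int) -> float:
    """Fraction of true meteors correctly classified as meteors."""
    return true_pos / (true_pos + false_neg)

def leakage(false_pos: int, true_neg: int) -> float:
    """Fraction of non-meteor tracks incorrectly passed on as meteors."""
    return false_pos / (false_pos + true_neg)

# Example: 981 of 1000 real meteors recovered; 21 of 1000 false tracks leak.
print(recall(981, 19))    # 0.981
print(leakage(21, 979))   # 0.021
```

With these definitions, maximizing recall limits missed orbit estimations, while minimizing leakage limits false alarms reaching the trajectory-estimation stage.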


Author(s):  
Adrian Wolny ◽  
Lorenzo Cerrone ◽  
Athul Vijayan ◽  
Rachele Tofanelli ◽  
Amaya Vilches Barro ◽  
...  

ABSTRACT
Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Adrian Wolny ◽  
Lorenzo Cerrone ◽  
Athul Vijayan ◽  
Rachele Tofanelli ◽  
Amaya Vilches Barro ◽  
...  

Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings, even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.
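The two-stage structure of the pipeline (boundary prediction followed by partitioning into cells) can be sketched in miniature. In this hedged illustration a synthetic boundary-probability map and simple connected-component labelling stand in for the CNN and for PlantSeg's actual graph-partitioning step; the function and threshold are hypothetical.

```python
# Minimal sketch of a two-stage segmentation pipeline in the spirit of
# PlantSeg: (1) a cell-boundary probability map, (2) partitioning of the
# volume into cells. Connected-component labelling is a simplified stand-in
# for the graph partitioner used in the real pipeline.
import numpy as np
from scipy import ndimage

def segment_cells(boundary_prob: np.ndarray, threshold: float = 0.5):
    """Label connected non-boundary regions as individual cells."""
    interior = boundary_prob < threshold       # voxels inside cells
    labels, n_cells = ndimage.label(interior)  # one integer label per cell
    return labels, n_cells

# Synthetic 1x8x8 "volume": a wall of high boundary probability at column 4
# splits the slice into two cells.
vol = np.zeros((1, 8, 8))
vol[:, :, 4] = 0.9
labels, n_cells = segment_cells(vol)
print(n_cells)  # 2
```

The design point is the separation of concerns: the network only has to learn where boundaries are, and any partitioning algorithm can then turn the boundary map into a cell-level segmentation.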


Author(s):  
P E Ajmire

Machine recognition of handwriting has been improving over the last decade. Machine reading of handwriting that closely approaches human performance is still an open problem and the central issue of an active field of research. Many researchers are working toward fully automating the reading, understanding, and interpretation of handwritten characters. This research work proposes new approaches for extracting features in the context of handwritten Marathi numeral recognition, using an artificial neural network for classification. The overall accuracy of recognition of handwritten Devanagari numerals is 99.67% with an SVM classifier, 99% with an MLP, and 98.13% with a GFF network.


2020 ◽  
Vol 12 (1) ◽  
pp. 1-11
Author(s):  
Philippe Chassy ◽  
Frederic Surre

The attractor hypothesis states that knowledge is encoded as topologically defined, stable configurations of connected cell assemblies. Irrespective of its original state, a network encoding new information will thus self-organize to reach the necessary stable state. To investigate memory structure, a multimodular neural network architecture, termed Magnitron, has been developed. Magnitron is a biologically-inspired cognitive architecture that simulates digit recognition. It implements perceptual input, human visual long-term memory in the ventral visual pathway and, to a lesser extent, working memory processes. To test the attractor hypothesis, a Monte Carlo simulation of 10,000 individuals has been run. Each simulated learner was trained in recognizing the ten digits from novice to expert stage. The results replicate several features of human learning. First, they show that random connectivity in long-term visual memory accounts for novices’ performance. Second, the learning curves revealed that Magnitron simulates the well-known psychological power law of practice. Third, after learning took place, performance departed from chance level and reached a minimum target of 95% correct hits, hence simulating human performance in children (i.e., when digits are learned). Magnitron also replicates biological findings. In line with research using voxel-based morphometry, Magnitron showed that matter density increases while training takes place. Crucially, the spatial analysis of the connectivity patterns in long-term visual memory supported the hypothesis of a stable attractor. The significance of these results regarding memory theory is discussed.
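The power law of practice mentioned in the abstract has a standard functional form: error (or response time) falls as a power function of the number of practice trials. The coefficients in this sketch are hypothetical, chosen only to show the characteristic shape of the curve, not fitted to Magnitron's data.

```python
# Illustration of the power law of practice, E(N) = a * N**(-b):
# improvement is steep early in training and flattens with practice.
# The coefficients a and b are hypothetical.
def practice_error(n_trials: int, a: float = 0.5, b: float = 0.4) -> float:
    """Error rate after n_trials practice trials under a power law."""
    return a * n_trials ** (-b)

# The gain from trials 1->10 far exceeds the gain from trials 100->1000.
early = practice_error(1) - practice_error(10)
late = practice_error(100) - practice_error(1000)
print(early > late)  # True
```

This diminishing-returns shape is the signature a simulated learning curve must reproduce to count as matching the psychological power law of practice.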


2020 ◽  
Author(s):  
Nicholas Menghi ◽  
Kemal Kacar ◽  
Will Penny

Abstract
This paper uses constructs from the field of multitask machine learning to define pairs of learning tasks that either shared or did not share a common subspace. Human subjects then learnt these tasks using a feedback-based approach. We found, as hypothesised, that subject performance was significantly higher on the second task if it shared the same subspace as the first, an advantage that played out most strongly at the beginning of the second task. Additionally, accuracy was positively correlated over subjects learning same-subspace tasks but was not correlated for those learning different-subspace tasks. These results, and other aspects of learning dynamics, were compared to the behaviour of a neural network model trained using sequential Bayesian inference. Human performance was found to be consistent with a Soft Parameter Sharing variant of this model that constrained representations to be similar among tasks, but only when this aided learning. We propose that the concept of shared subspaces provides a useful framework for the experimental study of human multitask and transfer learning.
Author summary
How does knowledge gained from previous experience affect learning of new tasks? This question of “transfer learning” has been addressed by teachers, psychologists, and more recently by researchers in the fields of neural networks and machine learning. Leveraging constructs from machine learning, we designed pairs of learning tasks that either shared or did not share a common subspace. We compared the dynamics of transfer learning in humans with those of a multitask neural network model, finding that human performance was consistent with a soft parameter sharing variant of the model. Learning was boosted in the early stages of the second task if the same subspace was shared between tasks. Additionally, accuracy between tasks was positively correlated, but only when they shared the same subspace. Our results highlight the role of subspaces, showing how they could act as a learning boost if shared, and be detrimental if not.
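The soft parameter sharing idea can be sketched concretely: each task keeps its own weights, but a penalty term pulls the two weight sets toward each other, so representations are similar only insofar as the data allow. The penalty form and strength below are a common textbook choice and are hypothetical here, not the authors' exact model.

```python
# Hedged sketch of soft parameter sharing: an L2 penalty on the distance
# between task-specific weight vectors, added to each task's loss during
# training. The penalty form and strength are hypothetical.
import numpy as np

def soft_sharing_penalty(w_task1: np.ndarray, w_task2: np.ndarray,
                         strength: float = 0.1) -> float:
    """Scaled squared L2 distance between two tasks' weight vectors."""
    return strength * float(np.sum((w_task1 - w_task2) ** 2))

w1 = np.array([1.0, 2.0, 3.0])
w2 = np.array([1.0, 2.5, 2.0])
print(soft_sharing_penalty(w1, w2))  # 0.1 * (0.25 + 1.0) = 0.125
```

Because the penalty is soft rather than forcing identical weights, tasks that genuinely share a subspace can converge toward common representations, while unrelated tasks can keep their weights apart, which is the behaviour the paper reports in human learners.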

