Memorizing All for Implicit Discourse Relation Recognition

Author(s):  
Kashif Munir ◽  
Hongxiao Bai ◽  
Hai Zhao ◽  
Junhan Zhao

Implicit discourse relation recognition is a challenging task due to the absence of the informative clues that explicit connectives provide. An implicit discourse relation recognizer has to carefully tackle the semantic similarity of sentence pairs and a severe data sparsity issue. In this article, we learn token embeddings that encode the structure of a sentence from a dependency point of view and use them to initialize a baseline model, making it substantially stronger. We then propose a novel memory component that tackles the data sparsity issue by allowing the model to master the entire training set, which yields further performance improvement. The memory mechanism memorizes information by pairing the representations and discourse relations of all training instances, thus addressing the data-hungry nature of current implicit discourse relation recognizers. The proposed memory component, attached to any suitable baseline, can help enhance performance. Experiments show that our full model, which memorizes the entire training data, provides excellent results on the PDTB and CDTB datasets, outperforming the baselines by a fair margin.
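
As an illustration of the idea, the sketch below (not the authors' implementation; class and variable names are hypothetical) stores a representation/relation pair for every training instance and, at inference time, returns a relation distribution over the nearest stored neighbors.

```python
# Minimal sketch of a "memorize all" component: every training instance's
# representation and relation label is kept, and queries are answered by
# cosine similarity against the whole training set.
import numpy as np

class TrainingMemory:
    def __init__(self, reps: np.ndarray, labels: np.ndarray):
        # reps: (n_train, d) sentence-pair representations
        # labels: (n_train,) non-negative integer relation ids
        self.reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        self.labels = labels

    def query(self, rep: np.ndarray, k: int = 8) -> np.ndarray:
        # Cosine similarity against the entire memorized training set.
        sims = self.reps @ (rep / np.linalg.norm(rep))
        top = np.argsort(-sims)[:k]
        # Label distribution among the k nearest stored instances.
        dist = np.bincount(self.labels[top], minlength=self.labels.max() + 1)
        return dist / dist.sum()
```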

2014 ◽  
Vol 26 (3) ◽  
pp. 383-400 ◽  
Author(s):  
Andrea Ellero ◽  
Paola Pellegrini

Purpose – The aim of this paper is to assess the performance of different widely adopted models for forecasting Italian hotel occupancy. In particular, the paper tests the models on demand forecasting for hotels located in urban areas, which typically experience both business and leisure demand, and whose demand is often affected by special events held in the hotels themselves or in their neighborhood. Design/methodology/approach – Several forecasting models that the literature reports as most suitable for hotel room occupancy data were selected. Historical occupancy data from five Italian hotels were divided into a training set and a test set. The parameters of the models were trained and fine-tuned on the training data, yielding one specific parameter set for each of the five hotels. For each hotel, each method, with its best parameter choice, was used to forecast room occupancy on the test set. Findings – In this particular Italian market, models based on booking information outperform historical ones: pick-up models achieve the best results, but the forecasts are in any case rather poor. Research limitations/implications – The main conclusion of the analysis is that pick-up models are the most promising. Nonetheless, none of the traditional forecasting models tested appears satisfactory in the Italian framework, even though the data collected by the front offices can be rather rich. Originality/value – From a managerial point of view, the outcome of the study shows that traditional forecasting models can be considered only a sort of “first aid” for revenue management decisions.
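
For concreteness, here is a hedged sketch of a classic additive pick-up forecast, one common variant of the booking-based family the paper tests; the paper's own model specifications may differ.

```python
# Additive pick-up: forecast final occupancy as rooms currently on the books
# plus the average historical pick-up observed over the same lead time.
import numpy as np

def additive_pickup_forecast(on_books_now: float,
                             hist_on_books: np.ndarray,
                             hist_final: np.ndarray) -> float:
    """Forecast final occupancy for a stay date d days ahead.

    hist_on_books: rooms on the books d days before arrival, for past dates.
    hist_final:    final rooms sold for those same past dates.
    """
    avg_pickup = np.mean(hist_final - hist_on_books)  # average historical pick-up
    return on_books_now + avg_pickup
```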


2021 ◽  
Vol 13 (3) ◽  
pp. 368
Author(s):  
Christopher A. Ramezan ◽  
Timothy A. Warner ◽  
Aaron E. Maxwell ◽  
Bradley S. Price

The size of the training data set is a major determinant of classification accuracy. Nevertheless, the collection of a large training data set for supervised classifiers can be a challenge, especially for studies covering a large area, which may be typical of many real-world applied projects. This work investigates how variations in training set size, ranging from a large sample size (n = 10,000) to a very small sample size (n = 40), affect the performance of six supervised machine-learning algorithms applied to classify large-area high-spatial-resolution (HR) (1–5 m) remotely sensed data within the context of a geographic object-based image analysis (GEOBIA) approach. GEOBIA, in which adjacent similar pixels are grouped into image-objects that form the unit of the classification, offers the potential benefit of allowing multiple additional variables, such as measures of object geometry and texture, thus increasing the dimensionality of the classification input data. The six supervised machine-learning algorithms are support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), single-layer perceptron neural networks (NEU), learning vector quantization (LVQ), and gradient-boosted trees (GBM). RF, the algorithm with the highest overall accuracy, was notable for its negligible decrease in overall accuracy, 1.0%, when training sample size decreased from 10,000 to 315 samples. GBM provided similar overall accuracy to RF; however, the algorithm was very expensive in terms of training time and computational resources, especially with large training sets. In contrast to RF and GBM, NEU and SVM were particularly sensitive to decreasing sample size, with NEU classifications generally producing overall accuracies that were on average slightly higher than SVM classifications for larger sample sizes, but lower than SVM for the smallest sample sizes. NEU, however, required a longer processing time. The k-NN classifier saw less of a drop in overall accuracy than NEU and SVM as training set size decreased; however, its overall accuracies were typically lower than those of the RF, NEU, and SVM classifiers. LVQ generally had the lowest overall accuracy of all six methods, but was relatively insensitive to sample size, down to the smallest sample sizes. Overall, due to its relatively high accuracy with small training sample sets, its minimal variation in overall accuracy between very large and small sample sets, and its relatively short processing time, RF was a good classifier for large-area land-cover classifications of HR remotely sensed data, especially when training data are scarce. However, as the performance of different supervised classifiers varies in response to training set size, investigating multiple classification algorithms is recommended to achieve optimal accuracy for a project.
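
The comparison protocol can be sketched as follows on synthetic data (not the paper's imagery; LVQ is omitted because scikit-learn has no built-in implementation): train each classifier on progressively smaller subsets and score it on a fixed held-out test set.

```python
# Illustrative sketch: accuracy vs. training set size for several classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=12000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=2000,
                                                  random_state=0)
for n in (10000, 315, 40):                      # sizes echoed from the abstract
    for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                      ("SVM", SVC()),
                      ("GBM", GradientBoostingClassifier(random_state=0))]:
        clf.fit(X_pool[:n], y_pool[:n])         # train on a subset of size n
        print(name, n, accuracy_score(y_test, clf.predict(X_test)))
```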


2014 ◽  
Vol 539 ◽  
pp. 181-184
Author(s):  
Wan Li Zuo ◽  
Zhi Yan Wang ◽  
Ning Ma ◽  
Hong Liang

Accurate classification of text is a basic premise for efficiently extracting various types of information from the Web and properly utilizing network resources. In this paper, a new text classification method is proposed. The consistency analysis method is an iterative algorithm that trains several different (weak) classifiers on the same training set; these classifiers are then combined to test the degree of consistency among the classification methods on the same text, thereby pooling the knowledge of each type of classifier. It determines the weight of each sample according to whether that sample was classified correctly in each training round, as well as the accuracy of the last overall classification, and then passes the reweighted data set to the next classifier for training. In the end, the classifiers obtained during training are integrated into the final decision classifier. A classifier with consistency analysis can eliminate some unnecessary training data characteristics and focus on the key training data. According to the experimental results, the average accuracy of this method is 91.0%, while the average recall rate is 88.1%.
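
The reweighting loop described above is close in spirit to AdaBoost; a minimal sketch under that reading (decision stumps as the weak classifiers) might look like this:

```python
# AdaBoost-style reweighting: each round trains a weak classifier on the
# weighted training set, then raises the weights of misclassified samples
# so later classifiers concentrate on the hard, "key" training data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def consistency_train(X, y, rounds=10):
    w = np.full(len(y), 1 / len(y))             # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        clf = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        miss = clf.predict(X) != y
        err = max(w[miss].sum(), 1e-10)          # weighted error of this round
        alpha = 0.5 * np.log((1 - err) / err)    # classifier weight from accuracy
        w *= np.exp(alpha * np.where(miss, 1, -1))
        w /= w.sum()                             # renormalize for the next round
        ensemble.append((alpha, clf))            # combine into the final decision
    return ensemble
```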


2020 ◽  
Vol 10 (6) ◽  
pp. 2104
Author(s):  
Michał Tomaszewski ◽  
Paweł Michalski ◽  
Jakub Osuchowski

This article presents an analysis of the effectiveness of object detection in digital images with a limited quantity of input data. The possibility of using a limited set of learning data was achieved by developing a detailed scenario of the task, which strictly defined the conditions of detector operation in the considered case of a convolutional neural network. The described solution utilizes known architectures of deep neural networks in the process of learning and object detection. The article presents comparisons of detection results from the most popular deep neural networks while maintaining a limited training set composed of a specific number of selected images from diagnostic video. The analyzed input material was recorded during an inspection flight conducted along high-voltage lines. The object detector was built for a power insulator. The main contribution of the presented paper is the evidence that a limited training set (in our case, just 60 training frames) can be used for object detection, assuming an outdoor scenario with low variability of environmental conditions. The decision of which network will generate the best results for such a limited training set is not a trivial task. The conducted research suggests that deep neural networks achieve different levels of effectiveness depending on the amount of training data. The most beneficial results were obtained for two convolutional neural networks: the faster region-convolutional neural network (faster R-CNN) and the region-based fully convolutional network (R-FCN). Faster R-CNN reached the highest AP (average precision), at a level of 0.8, for 60 frames. The R-FCN model attained a worse AP result; however, the number of input samples had a significantly weaker influence on its results than it did for the other CNN models, which, in the authors’ assessment, is a desirable feature in the case of a limited training set.
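
For reference, the AP metric reported above can be computed from ranked detections as in the sketch below (a standard formulation, not the authors' exact evaluation code):

```python
# AP from a ranked list of detections: sort by confidence, accumulate
# true/false positives, and sum precision over recall increments.
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp_flags = np.asarray(is_true_positive, dtype=bool)[order]
    tp = np.cumsum(tp_flags)                     # cumulative true positives
    fp = np.cumsum(~tp_flags)                    # cumulative false positives
    recall = tp / n_ground_truth
    precision = tp / (tp + fp)
    # AP as the area under the precision-recall curve (step-wise sum)
    return float(np.sum(np.diff(recall, prepend=0.0) * precision))
```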


Wireless sensor networks incorporate an innovative aspect called data handling technologies for big data organization. In today’s research, data aggregation occupies an important position and is emerging rapidly. Data aggregation includes the process of accumulating data at a node and then either storing it or transferring it onward to the destination. This survey reviews previous work on data aggregation in WSNs and its impact on different services. A number of data aggregation techniques are available for reducing, processing, and storing data, and some of them are discussed here as a review. Data aggregation techniques can also aim at energy efficiency, time efficiency, and security in the form of confidentiality, integrity, authentication, freshness, quality, data availability, access control, non-repudiation, and secrecy. These are the relevant performance metrics for maintaining good QoS in WSN applications. The goal of this paper is to present an overview of existing techniques for performance improvement in homogeneous/heterogeneous networks.
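
As a toy illustration of node-level aggregation (accumulate readings, forward only a summary), one might sketch:

```python
# A node buffers raw sensor readings and forwards only an aggregate,
# reducing the volume of data transmitted toward the destination.
from statistics import mean

class SensorNode:
    def __init__(self, flush_size: int = 10):
        self.buffer, self.flush_size = [], flush_size

    def sense(self, reading: float):
        self.buffer.append(reading)
        if len(self.buffer) >= self.flush_size:
            return self.flush()                  # forward the aggregate upstream
        return None                              # otherwise keep accumulating

    def flush(self):
        summary = {"mean": mean(self.buffer),
                   "min": min(self.buffer), "max": max(self.buffer),
                   "count": len(self.buffer)}
        self.buffer.clear()
        return summary
```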


2021 ◽  
Vol 10 (3) ◽  
pp. 79-87
Author(s):  
Susi Septi Hardiani ◽  
M. Safii ◽  
Dedi Suhendro

Toddlers are among the groups most vulnerable to nutritional problems from the point of view of health and nutrition, as they are at this age going through a cycle of relatively rapid growth and development. A rate of .7% is quite high given the relatively large number of births. The researchers classify 10 toddlers using WEKA to determine whether they have nutritional disorders or are normal, using 5 attributes as system input and a nutrition class divided into 4 values, namely bad, less, good, and more, with 219 training records. The results were then compared with the toddlers’ actual nutritional condition, yielding an accuracy of 60% and an error of 40%; from these results it can be concluded that the accuracy is not very good. It is hoped that the results of this classification can help further research in classifying the nutrition of children under five.
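
The reported evaluation amounts to comparing the predicted nutrition class against the actual one for the 10 test toddlers; a tiny sketch with hypothetical placeholder labels (not the study's data) reproduces the 60% figure:

```python
# Accuracy as the fraction of test toddlers whose predicted nutrition class
# matches the actual class. Labels below are illustrative placeholders.
predicted = ["good", "less", "good", "bad", "good",
             "more", "good", "less", "good", "good"]
actual    = ["good", "good", "good", "bad", "less",
             "more", "good", "less", "more", "bad"]
correct = sum(p == a for p, a in zip(predicted, actual))
print(f"accuracy = {correct / len(actual):.0%}")   # 60% -> error 40%
```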


Author(s):  
Hengyi Cai ◽  
Hongshen Chen ◽  
Yonghao Song ◽  
Xiaofang Zhao ◽  
Dawei Yin

Humans benefit from previous experiences when taking actions. Similarly, related examples from the training data also provide exemplary information for neural dialogue models when responding to a given input message. However, effectively fusing such exemplary information into dialogue generation is non-trivial: useful exemplars need to be not only literally similar but also topically related to the given context. Noisy exemplars impair the neural dialogue model’s understanding of the conversation topics and can even corrupt response generation. To address these issues, we propose an exemplar-guided neural dialogue generation model where exemplar responses are retrieved in terms of both text similarity and topic proximity through a two-stage exemplar retrieval model. In the first stage, a small subset of conversations is retrieved from the training set given a dialogue context. These candidate exemplars are then finely ranked by topical proximity to choose the best-matched exemplar response. To further induce the neural dialogue generation model to consult the exemplar response and the conversation topics more faithfully, we introduce a multi-source sampling mechanism that provides the dialogue model with both local exemplary semantics and global topical guidance during decoding. Empirical evaluations on a large-scale conversation dataset show that the proposed approach significantly outperforms the state of the art in terms of both quantitative metrics and human evaluations.
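
A hedged sketch of such a two-stage retrieval is given below; the lexical stage uses TF-IDF as a stand-in for the paper's text-similarity component, and `topic_model` is a hypothetical callable returning a topic vector.

```python
# Stage 1 shortlists training conversations by lexical similarity;
# stage 2 re-ranks the shortlist by topic proximity to pick one exemplar.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def retrieve_exemplar(context, train_contexts, train_responses,
                      topic_model, k=20):
    vec = TfidfVectorizer().fit(train_contexts + [context])
    M = vec.transform(train_contexts)
    q = vec.transform([context])
    lexical = (M @ q.T).toarray().ravel()        # stage 1: text similarity
    shortlist = np.argsort(-lexical)[:k]
    topics = topic_model(context)                # stage 2: topic proximity
    scores = [float(topics @ topic_model(train_contexts[i]))
              for i in shortlist]
    best = shortlist[int(np.argmax(scores))]
    return train_responses[best]                 # best-matched exemplar response
```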


Author(s):  
Søren Ager Meldgaard ◽  
Jonas Köhler ◽  
Henrik Lund Mortensen ◽  
Mads-Peter Verner Christiansen ◽  
Frank Noé ◽  
...  

Chemical space is routinely explored by machine learning methods to discover interesting molecules before time-consuming experimental synthesis is attempted. However, these methods often rely on a graph representation, ignoring the 3D information necessary for determining the stability of the molecules. We propose a reinforcement learning approach for generating molecules in Cartesian coordinates, allowing for quantum chemical prediction of their stability. To improve sample efficiency, we learn basic chemical rules from imitation learning on the GDB-11 database to create an initial model applicable to all stoichiometries. We then deploy multiple copies of the model, each conditioned on a specific stoichiometry, in a reinforcement learning setting. The models correctly identify low-energy molecules in the database and produce novel isomers not found in the training set. Finally, we apply the model to larger molecules to show how reinforcement learning further refines the imitation learning model in domains far from the training data.
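
Schematically, the two-phase training could be sketched as below; `policy`, `gdb11_trajectories`, `quantum_energy`, and `optimizer` are hypothetical stand-ins, not the authors' components.

```python
def pretrain_by_imitation(policy, gdb11_trajectories, optimizer):
    # Behavior cloning: maximize the likelihood of expert atom placements
    # extracted from GDB-11 molecules.
    for trajectory in gdb11_trajectories:        # expert (state, action) pairs
        for state, action in trajectory:
            loss = -policy.log_prob(action, state)
            optimizer.step(loss)

def finetune_by_rl(policy, stoichiometry, quantum_energy, optimizer, episodes):
    # REINFORCE-style fine-tuning of a per-stoichiometry copy of the model.
    for _ in range(episodes):
        molecule, log_probs = policy.rollout(stoichiometry)  # place atoms in 3D
        reward = -quantum_energy(molecule)       # lower energy -> higher reward
        loss = -reward * sum(log_probs)
        optimizer.step(loss)
```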


2016 ◽  
Vol 2 ◽  
pp. e79 ◽  
Author(s):  
Naga Durga Prasad Avirneni ◽  
Prem Kumar Ramesh ◽  
Arun K. Somani

Timing Speculation (TS) is a widely known method for realizing better-than-worst-case systems. Aggressive clocking, realizable by TS, enables systems to operate beyond specified safe frequency limits to effectively exploit data-dependent circuit delay. However, the range of aggressive clocking for performance enhancement under TS is restricted by short paths. In this paper, we show that increasing the lengths of a circuit’s short paths increases the effectiveness of TS, leading to performance improvement. We also propose an algorithm that efficiently adds delay buffers to selected short paths while keeping the area penalty down. We present results of our algorithm on the ISCAS-85 suite and show that it is possible to increase the circuit contamination delay by up to 30% without affecting the propagation delay. We also explore the possibility of increasing short-path delays further by relaxing the constraint on propagation delay, and we analyze the performance impact.
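
A hedged sketch of the core idea (greedy buffering of the shortest paths under a fixed propagation-delay budget, not the authors' exact algorithm):

```python
def path_delays(paths, extra):
    # paths: list of paths, each a list of (edge_id, base_delay) pairs
    # extra: dict mapping edge_id -> buffer delay added so far
    return [sum(d + extra.get(e, 0.0) for e, d in path) for path in paths]

def pad_short_paths(paths, target_contamination, step=0.1):
    extra = {}
    t_prop = max(path_delays(paths, extra))      # propagation-delay budget
    while min(path_delays(paths, extra)) < target_contamination:
        delays = path_delays(paths, extra)
        shortest = paths[delays.index(min(delays))]
        edge, _ = shortest[0]                    # buffer an edge on this path
        extra[edge] = extra.get(edge, 0.0) + step
        if max(path_delays(paths, extra)) > t_prop:
            extra[edge] -= step                  # would hurt propagation delay
            break
    return extra                                 # buffer delay per edge
```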


Author(s):  
Ainhoa Serna ◽  
Jon Kepa Gerrikagoitia

In recent years, advances in digital technology and research methods have applied natural language processing to better understand consumers and what they share in social media. There are hardly any transportation studies based on TripAdvisor, and none offers a complete analysis from the standpoint of sentiment analysis. The aim of this study is to investigate and discover the presence of sustainable transport modes, such as walking mobility, underlying non-categorized TripAdvisor texts, in order to have a positive impact on public services and businesses. The methodology follows a quantitative and qualitative approach based on knowledge discovery techniques. Thus, data gathering, normalization, classification, polarity analysis, and labelling tasks have been carried out to obtain a sentiment-labelled training data set in the transport domain as a valuable contribution to predictive analytics. This research has allowed the authors to discover sustainable transport modes underlying the texts, focused on walking mobility but extensible to other means of transport and social media sources.
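
A simplified sketch of such a pipeline is shown below; the keyword lexicons are illustrative stand-ins, not the study's resources.

```python
# Normalize a review text, tag the transport modes it mentions, and attach
# a polarity label — a toy version of the gathering/normalization/
# classification/polarity steps described above.
TRANSPORT_TERMS = {"walk": "walking", "walking": "walking",
                   "bus": "bus", "tram": "tram", "metro": "metro"}
POSITIVE = {"lovely", "easy", "pleasant"}
NEGATIVE = {"crowded", "dirty", "slow"}

def label_review(text: str):
    tokens = text.lower().split()                # normalization step
    modes = {TRANSPORT_TERMS[t] for t in tokens if t in TRANSPORT_TERMS}
    score = (sum(t in POSITIVE for t in tokens)
             - sum(t in NEGATIVE for t in tokens))
    polarity = ("positive" if score > 0
                else "negative" if score < 0 else "neutral")
    return {"modes": sorted(modes), "polarity": polarity}

print(label_review("A lovely walk in the old town, easy to reach by tram"))
```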

