Event Time Extraction with a Decision Tree of Neural Classifiers

Author(s):  
Nils Reimers ◽  
Nazanin Dehghani ◽  
Iryna Gurevych

Extracting from text the information on when an event happened is challenging. Documents report not only on current events but also on past and future events, and the time information relevant to an event is often scattered across the document. In this paper we present a novel method to automatically anchor events in time. To our knowledge, it is the first approach that takes temporal information from the complete document into account. We created a decision tree that applies neural-network-based classifiers at its nodes, and use this tree to incrementally infer, in a stepwise manner, the time frame in which an event happened. We evaluate the approach on the TimeBank-EventTime Corpus (Reimers et al., 2016), achieving an accuracy of 42.0% against an inter-annotator agreement (IAA) of 56.7%. For events that span a single day, we observe an accuracy improvement of 33.1 points over the state-of-the-art CAEVO system (Chambers et al., 2014). Without retraining, we apply this model to SemEval-2015 Task 4 on automatic timeline generation and achieve an improvement of 4.01 points F1-score over the state of the art. Our code is publicly available.
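The stepwise inference described above can be sketched as a small decision tree whose nodes apply classifiers to narrow the time frame. This is a hypothetical illustration only: the real nodes are neural classifiers, whereas here they are stubbed as simple rules and all names are invented.

```python
# Hypothetical sketch of a decision tree of classifiers for event time
# anchoring; node logic is stubbed, not the authors' actual model.

def is_single_day(event):
    # Stub for a neural node: does the event span a single day?
    return event.get("duration_days", 1) == 1

def day_is_known(event):
    # Stub for a neural node: can the day be anchored from document cues?
    return "day" in event

def anchor_event_time(event):
    """Walk the decision tree, refining the time frame at each node."""
    if is_single_day(event):
        if day_is_known(event):
            return ("single_day", event["day"])
        return ("single_day", "unknown")
    # Multi-day events: infer begin and end points separately.
    return ("multi_day",
            event.get("begin", "unknown"),
            event.get("end", "unknown"))

print(anchor_event_time({"duration_days": 1, "day": "1998-02-06"}))
```

Each node commits to one decision before the next classifier runs, which mirrors the incremental, stepwise narrowing the abstract describes.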

2014 ◽  
Vol 2014 ◽  
pp. 1-20 ◽  
Author(s):  
Michal Jancosek ◽  
Tomas Pajdla

We present a novel method for 3D surface reconstruction from an input cloud of 3D points augmented with visibility information. We observe that it is possible to reconstruct surfaces that do not contain input points. Instead of modeling the surface from the input points, we model free space from the visibility information of the input points. The complement of the modeled free space is considered full space, and the surface occurs at the interface between the free and the full space. We show that, under certain conditions, a part of the full space surrounded by free space must contain a real object even when that object does not contain any input points; that is, an occluder reveals itself through occlusion. Our key contribution is a new interface classifier that can detect the occluder interface from the visibility of the input points alone. We use this interface classifier to modify a state-of-the-art surface reconstruction method so that it gains the ability to reconstruct weakly supported surfaces. We evaluate the proposed method on datasets augmented with different levels of noise, undersampling, and varying amounts of outliers, and show that it outperforms other methods in accuracy and in the ability to reconstruct weakly supported surfaces.


Author(s):  
Pil-Ho Lee ◽  
Haseung Chung ◽  
Sang Won Lee ◽  
Jeongkon Yoo ◽  
Jeonghan Ko

This paper reviews the state-of-the-art research on dimensional accuracy in additive manufacturing (AM) processes. Improving dimensional accuracy is considered one of the major scientific challenges in enhancing the quality of products made by AM. The paper analyzes studies of commonly used AM techniques with respect to dimensional accuracy, classifies them by process characteristics, and examines the relevant accuracy issues. The accuracies of commercial AM machines are also listed, and suggestions for accuracy improvement are discussed. As dimensional accuracy increases, the applications of AM processes will not only diversify but also grow in value.


Author(s):  
AprilPyone Maungmaung ◽  
Hitoshi Kiya

In this paper, we propose a novel method for protecting convolutional neural network models with a secret key set, so that unauthorized users without the correct key set cannot access trained models. The method protects a model not only from copyright infringement but also from unauthorized use of its functionality, without any noticeable overhead. We introduce three block-wise transformations with a secret key set to generate learnable transformed images: pixel shuffling, negative/positive transformation, and format-preserving Feistel-based encryption. Protected models are trained on the transformed images. Experiments with the CIFAR and ImageNet datasets show that the performance of a protected model was close to that of non-protected models when the key set was correct, while accuracy dropped severely when an incorrect key set was given. The protected model was also demonstrated to be robust against various attacks. Compared with the state-of-the-art model protection with passports, the proposed method adds no extra layers to the network, and therefore incurs no overhead during training and inference.
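Of the three transforms, key-based block-wise pixel shuffling is the easiest to illustrate. The following is a minimal sketch, not the authors' implementation: the block size, key handling, and seeding scheme are assumptions made for the example.

```python
import numpy as np

# Illustrative key-based block-wise pixel shuffling: pixels inside each
# block x block tile are permuted by a permutation derived from the key.

def shuffle_blocks(image, key, block=4):
    """Shuffle pixels within each tile using a keyed permutation."""
    rng = np.random.default_rng(key)      # key seeds the permutation
    perm = rng.permutation(block * block)
    h, w, c = image.shape                 # h, w assumed divisible by block
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y+block, x:x+block].reshape(-1, c)
            out[y:y+block, x:x+block] = tile[perm].reshape(block, block, c)
    return out

img = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)
enc = shuffle_blocks(img, key=42)
```

A model trained on images shuffled with one key will see mismatched inputs when a different key is used, which is the source of the accuracy drop the abstract reports.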


2020 ◽  
Vol 34 (05) ◽  
pp. 9122-9129
Author(s):  
Hai Wan ◽  
Yufei Yang ◽  
Jianfeng Du ◽  
Yanan Liu ◽  
Kunxun Qi ◽  
...  

Aspect-based sentiment analysis (ABSA) aims to detect the targets (which are composed of contiguous words), aspects and sentiment polarities in text. Published datasets from SemEval-2015 and SemEval-2016 reveal that a sentiment polarity depends on both the target and the aspect. However, most existing methods predict sentiment polarities from either targets or aspects but not from both, so they easily make wrong predictions. In particular, when the target is implicit, i.e., it does not appear in the given text, methods that predict sentiment polarities from targets do not work at all. To tackle these limitations in ABSA, this paper proposes a novel method for target-aspect-sentiment joint detection. It relies on a pre-trained language model and can capture the dependence on both targets and aspects for sentiment prediction. Experimental results on the SemEval-2015 and SemEval-2016 restaurant datasets show that the proposed method achieves high performance in detecting target-aspect-sentiment triples even for implicit targets; moreover, it outperforms the state-of-the-art methods on those subtasks of target-aspect-sentiment detection that they are able to handle.


2020 ◽  
Vol 34 (05) ◽  
pp. 8799-8806
Author(s):  
Yuming Shang ◽  
He-Yan Huang ◽  
Xian-Ling Mao ◽  
Xin Sun ◽  
Wei Wei

The noisy labeling problem has been one of the major obstacles for distant supervised relation extraction. Existing approaches usually consider that the noisy sentences are useless and will harm the model's performance. Therefore, they mainly alleviate this problem by reducing the influence of noisy sentences, such as applying bag-level selective attention or removing noisy sentences from sentence-bags. However, the underlying cause of the noisy labeling problem is not the lack of useful information, but the missing relation labels. Intuitively, if we can allocate credible labels for noisy sentences, they will be transformed into useful training data and benefit the model's performance. Thus, in this paper, we propose a novel method for distant supervised relation extraction, which employs unsupervised deep clustering to generate reliable labels for noisy sentences. Specifically, our model contains three modules: a sentence encoder, a noise detector and a label generator. The sentence encoder is used to obtain feature representations. The noise detector detects noisy sentences from sentence-bags, and the label generator produces high-confidence relation labels for noisy sentences. Extensive experimental results demonstrate that our model outperforms the state-of-the-art baselines on a popular benchmark dataset, and can indeed alleviate the noisy labeling problem.
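The three-module pipeline (encoder, noise detector, label generator) can be caricatured in a few lines. This sketch substitutes bag-of-words features for the neural encoder and a similarity threshold for the learned detector; everything here is a stand-in for illustration, not the paper's model.

```python
from collections import Counter

# Toy sketch of the noise-detection step: flag sentences in a bag whose
# features poorly match every other sentence in the same bag.

def encode(sentence):
    # Stand-in encoder: bag-of-words counts instead of neural features.
    return Counter(sentence.lower().split())

def similarity(a, b):
    """Jaccard-style overlap between two bag-of-words counters."""
    shared = sum((a & b).values())
    total = sum((a | b).values())
    return shared / total if total else 0.0

def detect_noisy(bag, threshold=0.2):
    encoded = [encode(s) for s in bag]
    noisy = []
    for i, e in enumerate(encoded):
        others = [similarity(e, o) for j, o in enumerate(encoded) if j != i]
        if max(others, default=0.0) < threshold:
            noisy.append(i)
    return noisy

bag = ["obama was born in hawaii",
       "obama born in hawaii in 1961",
       "the stock market fell sharply"]
print(detect_noisy(bag))  # the unrelated sentence is flagged
```

In the paper, the flagged sentences are not discarded: the label generator then assigns them high-confidence relation labels via deep clustering so they rejoin the training data.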


2000 ◽  
Vol 31 ◽  
pp. 1-13
Author(s):  
Dennison Rusinow

I am often asked, by students, colleagues, and friends, why and how I came to devote my professional life to the study of Central and Southeastern European history and current events and, in the process, how I came to spend thirty years living in the area. I have two stock answers, both of them true.


Author(s):  
Yang He ◽  
Guoliang Kang ◽  
Xuanyi Dong ◽  
Yanwei Fu ◽  
Yi Yang

This paper proposes a Soft Filter Pruning (SFP) method to accelerate the inference of deep Convolutional Neural Networks (CNNs). Specifically, SFP allows the pruned filters to be updated when training the model after pruning. This gives SFP two advantages over previous work: (1) Larger model capacity. Updating previously pruned filters provides our approach with a larger optimization space than fixing the filters to zero, so the network trained by our method has a larger capacity to learn from the training data. (2) Less dependence on the pre-trained model. The larger capacity enables SFP to train from scratch and prune the model simultaneously, whereas previous filter pruning methods must be conducted on the basis of a pre-trained model to guarantee their performance. Empirically, SFP from scratch outperforms previous filter pruning methods. Moreover, our approach has been demonstrated to be effective for many advanced CNN architectures. Notably, on ILSVRC-2012, SFP reduces more than 42% of the FLOPs on ResNet-101 with even a 0.2% top-5 accuracy improvement, advancing the state of the art. Code is publicly available on GitHub: https://github.com/he-y/softfilter-pruning
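The core soft-pruning step can be sketched as follows: after each training epoch, the lowest-norm filters are zeroed but stay in the weight tensor, so gradient updates can revive them. The pruning criterion (L2 norm) follows the paper; the rate and tensor shapes below are assumptions for the example.

```python
import numpy as np

# Minimal sketch of one soft-pruning step on a conv weight tensor of
# shape (n_filters, channels, k, k): zero the weakest filters, keep them
# trainable (soft), rather than removing them from the network (hard).

def soft_prune(filters, prune_rate=0.3):
    n = filters.shape[0]
    norms = np.linalg.norm(filters.reshape(n, -1), axis=1)
    n_prune = int(n * prune_rate)
    idx = np.argsort(norms)[:n_prune]   # indices of the weakest filters
    pruned = filters.copy()
    pruned[idx] = 0.0                   # soft: zeroed, not deleted
    return pruned, idx

w = np.random.randn(10, 3, 3, 3)
w_pruned, zeroed = soft_prune(w)
```

Because the zeroed filters still receive gradients in the next epoch, the selection of "weak" filters can change over training, which is what distinguishes soft from hard pruning.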


2013 ◽  
Vol 8 (1) ◽  
pp. 743-750
Author(s):  
Manjula Shenoy ◽  
Dr. K.C. Shet ◽  
Dr. U. Dinesh Acharya

An ontology describes and defines the terms used to represent an area of knowledge. Different people or organizations come up with their own ontologies, each reflecting its own view of the domain, so for systems to interoperate it becomes necessary to map these heterogeneous ontologies. This paper discusses the state-of-the-art methods and outlines a new approach with improved precision and recall. The system also finds relationships other than 1:1 mappings.


Author(s):  
Mengxi Jia ◽  
Yunpeng Zhai ◽  
Shijian Lu ◽  
Siwei Ma ◽  
Jian Zhang

RGB-Infrared (IR) cross-modality person re-identification (re-ID), which aims to match an IR image against an RGB gallery or vice versa, is a challenging task due to the large discrepancy between the IR and RGB modalities. Existing methods typically address this challenge by aligning feature distributions or image styles across modalities, whereas the very useful similarities among gallery samples of the same modality (i.e., intra-modality sample similarities) are largely neglected. This paper presents a novel similarity inference metric (SIM) that exploits intra-modality sample similarities to circumvent the cross-modality discrepancy and achieve optimal cross-modality image matching. SIM works by successive similarity graph reasoning and mutual nearest-neighbor reasoning, which mine cross-modality sample similarities by leveraging intra-modality sample similarities from two different perspectives. Extensive experiments on two cross-modality re-ID datasets (SYSU-MM01 and RegDB) show that SIM achieves significant accuracy improvement with little extra training compared with the state of the art.
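One ingredient of SIM, mutual nearest-neighbor reasoning, is easy to illustrate in isolation: a cross-modality match is strengthened when the query and the gallery item each pick the other as nearest neighbor. The bonus-score scheme below is an assumption made for the sketch, not the paper's exact formulation.

```python
import numpy as np

# Sketch of mutual nearest-neighbour reasoning over a cross-modality
# distance matrix dist[i, j] (query i in one modality, gallery j in the
# other); mutual matches receive a score bonus.

def mutual_nn_scores(dist):
    q_best = dist.argmin(axis=1)        # nearest gallery item per query
    g_best = dist.argmin(axis=0)        # nearest query per gallery item
    scores = -dist.copy()               # base score: negated distance
    for i, j in enumerate(q_best):
        if g_best[j] == i:              # mutual nearest neighbours
            scores[i, j] += 1.0         # illustrative bonus
    return scores

dist = np.array([[0.1, 0.9],
                 [0.8, 0.2]])
print(mutual_nn_scores(dist).argmax(axis=1))  # -> [0 1]
```

The full method additionally propagates intra-modality similarities through a similarity graph before this re-ranking step.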


2020 ◽  
Author(s):  
Van Trung Chu ◽  
Shou-Hao Chiang ◽  
Tang-Huang Lin

<p>The aim of this study is to analyze the effect of landslide sample position in point-based approaches to landslide susceptibility modeling, conducted in a landslide hotspot located downstream of the Nam Ma watershed (Sin Ho, Lai Chau, Viet Nam). Seven hundred fifty-nine landslide polygons that occurred in 2018 were mapped using Google Earth combined with field surveys, and 84 landslide points were extracted from an inventory map compiled in 2013. State-of-the-art sampling techniques and a sample partition approach were applied to produce three point-based subsets of training and testing data: the highest point within each landslide polygon (SUB1), the centroid of each landslide polygon (SUB2), and the highest point within the seed-cell area of each landslide polygon (SUB3). In addition, an optimal strategy for selecting non-landslide samples was applied, introduced explicitly for the first time in this study. Multiple landslide conditioning factors were considered, spanning topographic, geomorphological and hydrological groups. Besides commonly used factors such as slope, elevation, curvature, land use/land cover and aspect, less common variables were also included, such as height above the nearest drainage (HAND, a state-of-the-art terrain descriptor) and the time series of land surface disturbance, used here for the first time in landslide analysis; other cutting-edge data processing steps were also proposed, aiming to optimize this most vital part of the procedure. The next stage of the analysis is landslide susceptibility modeling. To judge the main issue above more objectively, instead of using a single model we applied three different models, namely Random Forest (RF), Logistic Regression (LR) and Decision Tree (DT), to three scenarios defined by the different landslide subsets, with five folds in the training phase.
Subsequently, to compare these cases, model performance was assessed using the area under the receiver operating characteristic curve, both as a model success rate (AUCSR) and as a model predictive rate (AUCPR). Finally, the results show that all three models performed consistently across the three scenarios: SUB2 and SUB3 are quite similar and contribute much more than SUB1. The model ability analysis indicated that RF obtains the highest accuracy, followed by LR, with DT the lowest.</p><p><strong>Keywords:</strong> Sample position, Landslide Susceptibility, Logistic regression, Random forest, Decision tree, Viet Nam.</p>
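The AUC measures used for evaluation (success rate on training labels, predictive rate on held-out labels) reduce to the same rank statistic. A minimal self-contained version, for illustration only:

```python
# Rank-based AUC: the probability that a randomly chosen positive
# (landslide) receives a higher susceptibility score than a randomly
# chosen negative (non-landslide); ties count as half.

def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.7, 0.3, 0.6, 0.8]
print(roc_auc(labels, scores))  # -> 1.0 (every positive outranks every negative)
```

Computing this on the training folds gives the success rate (AUCSR); computing it on the withheld testing subset gives the predictive rate (AUCPR).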

