Performance Improvement
Recently Published Documents


TOTAL DOCUMENTS: 10830 (five years: 2809)
H-INDEX: 88 (five years: 18)

2022, Vol. 2(1), pp. 1-29
Author(s): Sukrit Mittal, Dhish Kumar Saxena, Kalyanmoy Deb, Erik D. Goodman

Learning effective problem information from the already-explored search space during an optimization run, and utilizing it to improve the convergence of subsequent solutions, have been important directions in Evolutionary Multi-objective Optimization (EMO) research. In this article, a machine learning (ML)-assisted approach is proposed that: (a) maps solutions from earlier generations of an EMO run to the current non-dominated solutions in the decision space; (b) learns the salient patterns in this mapping using an ML method, here an artificial neural network (ANN); and (c) uses the learned ML model to advance some of the subsequent offspring solutions in an adaptive manner. This multi-pronged approach, quite different from popular surrogate-modeling methods, leads to what is referred to here as the Innovized Progress (IP) operator. On several test and engineering problems involving two and three objectives, with and without constraints, an EMO algorithm assisted by the IP operator is shown to converge faster than its base version without the IP operator. The results are encouraging, pave a new path for the performance improvement of EMO algorithms, and motivate further exploration on more challenging problems.
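As a rough illustration of steps (a)-(c), the Python sketch below pairs archived earlier-generation solutions with their nearest current non-dominated solutions, fits a small ANN on those pairs, and then advances a fraction of offspring through the learned mapping. The nearest-neighbour pairing rule, network size, and progression fraction are illustrative assumptions, not the authors' exact design.

# A minimal sketch of an IP-style operator, assuming decision vectors
# are rows of NumPy arrays; not the paper's exact formulation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_ip_model(archive_X, nondominated_X):
    """Fit an ANN mapping archived solutions to the current front."""
    # Pair each archived solution with its nearest non-dominated
    # solution in the decision space (an assumed pairing scheme).
    dists = np.linalg.norm(
        archive_X[:, None, :] - nondominated_X[None, :, :], axis=2)
    targets = nondominated_X[dists.argmin(axis=1)]
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
    model.fit(archive_X, targets)
    return model

def advance_offspring(model, offspring_X, fraction=0.5, rng=None):
    """Progress a random fraction of offspring through the learned map."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(fraction * len(offspring_X))
    idx = rng.choice(len(offspring_X), size=n, replace=False)
    advanced = offspring_X.copy()
    advanced[idx] = model.predict(offspring_X[idx])
    return advanced

In a full EMO loop, the model would be retrained periodically as the non-dominated set evolves, and advanced offspring would still pass through the usual bound-repair and selection steps.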


2022, Vol. 40(3), pp. 1-24
Author(s): Jiaul H. Paik, Yash Agrawal, Sahil Rishi, Vaishal Shah

Existing probabilistic retrieval models do not restrict the domain of the random variables that they deal with. In this article, we show that the upper bound of the normalized term frequency (tf) in relevant documents is much smaller than the upper bound of the normalized tf in the whole collection. As a result, existing models suffer from two major problems: (i) the domain mismatch causes data-modeling error, and (ii) because outliers have very large magnitude and the retrieval models follow the tf hypothesis, the combination of these two factors tends to overestimate the relevance score. To address these problems, we propose novel weighted probabilistic models based on truncated distributions. We evaluate our models on a set of large document collections and demonstrate significant performance improvement over six existing probabilistic models.
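The sketch below shows the core idea of scoring with a truncated distribution: normalized tf is modeled on a bounded interval [0, b] instead of an unbounded domain, so outliers cannot inflate the score. The choice of a truncated exponential, the parameters lam and b, and the log-density scoring form are illustrative assumptions, not the paper's exact model.

# A minimal sketch of truncated-distribution term weighting; the
# distribution family and parameters are assumptions for illustration.
import math

def truncated_exp_pdf(x, lam, b):
    """Density of an Exponential(lam) truncated to the interval [0, b]."""
    if not 0.0 <= x <= b:
        return 0.0
    return lam * math.exp(-lam * x) / (1.0 - math.exp(-lam * b))

def relevance_score(query_terms, doc_tf, doc_len, lam=5.0, b=0.5):
    """Score a document by summing log-densities of normalized tf."""
    score = 0.0
    for term in query_terms:
        tf = doc_tf.get(term, 0)
        if tf == 0:
            continue
        ntf = min(tf / doc_len, b)  # clip outliers at the truncation bound
        score += math.log(truncated_exp_pdf(ntf, lam, b))
    return score

Clipping at the truncation bound b mirrors the abstract's observation: normalized tf values beyond the bound observed in relevant documents carry no extra evidence of relevance.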


Author(s): Kashif Munir, Hongxiao Bai, Hai Zhao, Junhan Zhao

Implicit discourse relation recognition is a challenging task due to the absence of informative clues from explicit connectives. An implicit discourse relation recognizer has to carefully handle the semantic similarity of sentence pairs and a severe data sparsity issue. In this article, we learn token embeddings that encode sentence structure from a dependency point of view and use them to initialize a strong baseline model. We then propose a novel memory component that tackles the data sparsity issue by allowing the model to master the entire training set, yielding further performance improvement. The memory mechanism memorizes information by pairing the representations and discourse relations of all training instances, thereby mitigating the data-hungry nature of current implicit discourse relation recognizers. The proposed memory component can improve performance when attached to any suitable baseline. Experiments show that our full model, which memorizes the entire training set, achieves excellent results on the PDTB and CDTB datasets, outperforming the baselines by a fair margin.
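A minimal sketch of such a memory component is given below: it pairs a fixed-size representation of each training instance with its relation label, and at read time votes over the nearest stored entries. The cosine top-k retrieval rule and the blending weights are illustrative stand-ins for the paper's mechanism, not its actual architecture.

# A minimal key-value memory over the training set; retrieval and
# blending rules are assumptions for illustration.
import numpy as np

class RelationMemory:
    """Pairs training-instance representations with relation labels."""

    def __init__(self, num_relations):
        self.keys, self.labels = [], []
        self.num_relations = num_relations

    def write(self, representation, relation_id):
        # Store a unit-normalized representation with its relation label.
        self.keys.append(representation / np.linalg.norm(representation))
        self.labels.append(relation_id)

    def read(self, representation, k=5):
        # Return a relation distribution voted by the k nearest memories.
        q = representation / np.linalg.norm(representation)
        sims = np.stack(self.keys) @ q          # cosine similarities
        dist = np.zeros(self.num_relations)
        for i in np.argsort(sims)[-k:]:
            dist[self.labels[i]] += max(sims[i], 0.0)
        return dist / max(dist.sum(), 1e-12)

The memory's vote could then be combined with the baseline classifier's output, for example probs = 0.8 * baseline_probs + 0.2 * memory.read(rep), where the weights are again assumed for illustration.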


2022, Vol. 204, pp. 111961
Author(s): Sara Ranjbari, Ali Ayati, Bahareh Tanhaei, Amani Al-Othman, Fatemeh Karimi
