determinantal point process
Recently Published Documents


TOTAL DOCUMENTS

35
(FIVE YEARS 16)

H-INDEX

6
(FIVE YEARS 2)

2021 ◽  
Vol 71 ◽  
pp. 371-399
Author(s):  
Laura Perez-Beltrachini ◽  
Mirella Lapata

The ability to convey relevant and diverse information is critical in multi-document summarization, yet it remains elusive for neural seq-to-seq models, whose outputs are often redundant and fail to cover important details. In this work, we propose an attention mechanism that encourages greater focus on relevance and diversity. Attention weights are computed from (proportional to) probabilities given by determinantal point processes (DPPs) defined on the set of content units to be summarized. DPPs have been used successfully in extractive summarization; here we use them to select relevant and diverse content for neural abstractive summarization. We integrate DPP-based attention with various seq-to-seq architectures, ranging from CNNs and LSTMs to Transformers. Experimental evaluation shows that our attention mechanism consistently improves summarization and delivers performance comparable to the state of the art on the MultiNews dataset.
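The abstract's core idea rests on a standard DPP fact: for an L-ensemble kernel L over content units, the marginal probability that unit i is included in a DPP sample is the i-th diagonal entry of K = L(L + I)^{-1}. A minimal sketch of turning such marginals into normalized attention weights (a toy construction under assumed inputs, not the paper's architecture or its actual attention computation) might look like:

```python
import numpy as np

def dpp_attention_weights(features):
    """Toy sketch: attention weights proportional to DPP marginal
    inclusion probabilities of content units.

    features: (n, d) array of hypothetical content-unit representations.
    """
    # L-ensemble kernel from unit similarities (quality folded into norms).
    L = features @ features.T
    n = L.shape[0]
    # Marginal kernel K = L (L + I)^{-1}; its diagonal gives P(i in sample).
    K = L @ np.linalg.inv(L + np.eye(n))
    marginals = np.clip(np.diag(K), 0.0, 1.0)
    # Normalize the marginals into an attention distribution.
    return marginals / marginals.sum()

rng = np.random.default_rng(0)
w = dpp_attention_weights(rng.normal(size=(5, 3)))
```

Units that are both salient (large norm) and dissimilar to others get larger marginals, which is exactly the relevance-plus-diversity pressure the abstract describes.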


2021 ◽  
Vol 46 ◽  
pp. 101292
Author(s):  
Ashraf Bsebsu ◽  
Gan Zheng ◽  
Sangarapillai Lambotharan

2021 ◽  
Vol 58 (2) ◽  
pp. 469-483
Author(s):  
Jesper Møller ◽  
Eliza O’Reilly

Abstract: For a determinantal point process (DPP) X with a kernel K whose spectrum is strictly less than one, André Goldman has established a coupling to its reduced Palm process $X^u$ at a point u with $K(u,u)>0$ so that, almost surely, $X^u$ is obtained by removing a finite number of points from X. We sharpen this result, assuming weaker conditions and establishing that $X^u$ can be obtained by removing at most one point from X, where we specify the distribution of the difference $\xi_u := X\setminus X^u$. This is used to discuss the degree of repulsiveness in DPPs in terms of $\xi_u$, including Ginibre point processes and other specific parametric models for DPPs.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1846
Author(s):  
Mohsen Saffari ◽  
Mahdi Khodayar ◽  
Mohammad Saeed Ebrahimi Saadabadi ◽  
Ana F. Sequeira ◽  
Jaime S. Cardoso

In recent years, deep neural networks have shown significant progress in computer vision due to their large generalization capacity; however, the overfitting problem ubiquitously threatens the learning process of these highly nonlinear architectures. Dropout is a recent solution to mitigate overfitting that has seen significant success in various classification applications. Recently, many efforts have been made to improve standard dropout using an unsupervised, merit-based semantic selection of neurons in the latent space. However, these studies consider neither the quality and quantity of task-relevant information nor the diversity of the latent kernels. To address the challenge of dropping less informative neurons in deep learning, we propose an efficient end-to-end dropout algorithm that selects the most informative neurons with the highest correlation with the target output while enforcing sparsity in its selection procedure. First, to promote activation diversity, we devise an approach that selects the most diverse set of neurons by making use of determinantal point process (DPP) sampling. Furthermore, to incorporate task specificity into deep latent features, a mutual information (MI)-based merit function is developed. Leveraging the proposed MI with DPP sampling, we introduce the novel DPPMI dropout, which adaptively adjusts the retention rate of neurons based on their contribution to the neural network task. Empirical studies on real-world classification benchmarks, including MNIST, SVHN, CIFAR10, and CIFAR100, demonstrate the superiority of our proposed method over recent state-of-the-art dropout algorithms in the literature.
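To make the "diverse set of neurons" idea concrete, here is a hedged sketch (a greedy log-determinant heuristic under assumed inputs, not the paper's DPPMI method, which also uses a mutual-information merit function): pick a diverse subset of neurons by greedily maximizing the determinant of the Gram kernel of their activation patterns, then zero out the rest as a dropout mask.

```python
import numpy as np

def dpp_dropout_mask(activations, keep):
    """Hypothetical sketch: greedily choose `keep` diverse neurons by
    maximizing the determinant of the kernel of their activation columns.

    activations: (batch, n_neurons) latent activations.
    Returns a 0/1 mask over neurons.
    """
    # Cosine-similarity kernel over neurons (columns normalized).
    A = activations / (np.linalg.norm(activations, axis=0, keepdims=True) + 1e-12)
    L = A.T @ A
    n = L.shape[0]
    chosen = []
    for _ in range(keep):
        best, best_det = None, -np.inf
        for j in range(n):
            if j in chosen:
                continue
            idx = chosen + [j]
            # Determinant grows when the new neuron is dissimilar to the set.
            det = np.linalg.det(L[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = j, det
        chosen.append(best)
    mask = np.zeros(n)
    mask[chosen] = 1.0
    return mask

rng = np.random.default_rng(1)
acts = rng.normal(size=(32, 8))
mask = dpp_dropout_mask(acts, keep=3)
```

Neurons whose activation patterns are redundant with already-selected ones contribute little determinant gain and are dropped, which mirrors the diversity-promoting role DPP sampling plays in the abstract.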


2020 ◽  
Vol 34 (04) ◽  
pp. 4932-4939
Author(s):  
Yong Liu ◽  
Yingtai Xiao ◽  
Qiong Wu ◽  
Chunyan Miao ◽  
Juyong Zhang ◽  
...  

Interactive recommender systems, which enable interactions between users and the recommender system, have attracted increasing research attention. Previous methods mainly focus on optimizing recommendation accuracy. However, they usually ignore the diversity of the recommendation results, which often leads to unsatisfying user experiences. In this paper, we propose a novel diversified recommendation model, named Diversified Contextual Combinatorial Bandit (DC2B), for interactive recommendation with users' implicit feedback. Specifically, DC2B employs a determinantal point process in the recommendation procedure to promote diversity of the recommendation results. To learn the model parameters, a Thompson sampling-type algorithm based on variational Bayesian inference is proposed. In addition, a theoretical regret analysis is provided to guarantee the performance of DC2B. Extensive experiments on real datasets demonstrate the effectiveness of the proposed method in balancing recommendation accuracy and diversity.
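DPP-based recommenders commonly use the quality/diversity decomposition L_ij = q_i S_ij q_j, where q_i is an item's relevance score and S its similarity kernel. The following is a simplified illustration of building a diverse slate from that decomposition via greedy log-determinant gains (a generic sketch with made-up inputs, not the DC2B bandit algorithm or its Thompson-sampling learner):

```python
import numpy as np

def diversified_slate(qualities, item_feats, k):
    """Sketch of DPP slate selection: L_ij = q_i * S_ij * q_j, then greedily
    add the item with the largest log-determinant gain. Inputs are
    hypothetical relevance scores and item features.
    """
    F = item_feats / np.linalg.norm(item_feats, axis=1, keepdims=True)
    S = F @ F.T                                   # item similarity kernel
    L = np.outer(qualities, qualities) * S        # quality-weighted kernel
    slate = []
    for _ in range(k):
        gains = []
        for j in range(len(qualities)):
            if j in slate:
                gains.append(-np.inf)             # never repeat an item
                continue
            idx = slate + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gains.append(logdet if sign > 0 else -np.inf)
        slate.append(int(np.argmax(gains)))
    return slate

rng = np.random.default_rng(2)
qual = rng.uniform(0.5, 1.0, size=6)
feats = rng.normal(size=(6, 4))
slate = diversified_slate(qual, feats, k=3)
```

High-quality but mutually similar items shrink the determinant, so the slate trades off accuracy against diversity, the balance the abstract's experiments measure.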


2020 ◽  
Vol 15 (1) ◽  
pp. 187-214 ◽  
Author(s):  
Ilaria Bianchini ◽  
Alessandra Guglielmi ◽  
Fernando A. Quintana

Author(s):  
Jack Poulson

Determinantal point processes (DPPs) were introduced by Macchi (Macchi 1975 Adv. Appl. Probab. 7, 83–122) as a model for repulsive (fermionic) particle distributions. But their recent popularization is largely due to their usefulness for encouraging diversity in the final stage of a recommender system (Kulesza & Taskar 2012 Found. Trends Mach. Learn. 5, 123–286). The standard sampling scheme for finite DPPs is a spectral decomposition followed by an equivalent of a randomly diagonally pivoted Cholesky factorization of an orthogonal projection, which is only applicable to Hermitian kernels and has an expensive set-up cost. Researchers (Launay et al. 2018, http://arxiv.org/abs/1802.08429 ; Chen & Zhang 2018 NeurIPS, https://papers.nips.cc/paper/7805-fast-greedy-map-inference-for-determinantal-point-process-to-improve-recommendation-diversity.pdf ) have begun to connect DPP sampling to LDL^H factorizations as a means of avoiding the initial spectral decomposition, but existing approaches have only outperformed the spectral decomposition approach in special circumstances, where the number of kept modes is a small percentage of the ground set size. This article proves that trivial modifications of LU and LDL^H factorizations yield efficient direct sampling schemes for non-Hermitian and Hermitian DPP kernels, respectively. Furthermore, it is experimentally shown that even dynamically scheduled, shared-memory parallelizations of high-performance dense and sparse-direct factorizations can be trivially modified to yield DPP sampling schemes with essentially identical performance. The software developed as part of this research, Catamari (hodgestar.com/catamari), is released under the Mozilla Public License v.2.0. It contains header-only, C++14 plus OpenMP 4.0 implementations of dense and sparse-direct, Hermitian and non-Hermitian DPP samplers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
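The "trivial modification" of the factorization can be sketched compactly: run an LDL^H-style elimination over the marginal kernel K, and at each pivot, interpret the diagonal entry as the conditional probability of including that item; on exclusion, subtract one from the pivot before eliminating. This is a minimal dense real-symmetric sketch in the spirit of the article (not Catamari's blocked, parallel C++ implementation):

```python
import numpy as np

def sample_dpp(K, rng):
    """Dense sketch of factorization-based DPP sampling from a marginal
    kernel K (eigenvalues assumed in [0, 1]). Each pivot's diagonal entry
    is the conditional inclusion probability of that item."""
    A = np.array(K, dtype=float)
    n = A.shape[0]
    sample = []
    for j in range(n):
        if rng.uniform() < A[j, j]:
            sample.append(j)       # include item j
        else:
            A[j, j] -= 1.0         # condition on excluding item j
        if j + 1 < n:
            # Schur-complement update, as in an unblocked LDL^H sweep.
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])
    return sample

rng = np.random.default_rng(3)
s = sample_dpp(0.5 * np.eye(4), rng)
```

With K = 0.5 I the items are sampled independently with probability 1/2, while off-diagonal structure in K induces the repulsion between included items; a non-Hermitian kernel would use the analogous LU sweep.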


2020 ◽  
Vol 24 ◽  
pp. 227-243
Author(s):  
Adrien Clarenne

We consider a collection of weighted Euclidean random balls in ℝd distributed according to a determinantal point process. We perform a zoom-out procedure by shrinking the radii while increasing the number of balls. We observe that the repulsion between the balls is erased and that three different regimes are obtained, the same as in the weighted Poissonian case.

