Interactive Deep Learning for Shelf Life Prediction of Muskmelons Based on an Active Learning Approach

Sensors
2022
Vol. 22 (2)
pp. 414
Author(s):  
Dominique Albert-Weiss
Ahmad Osman

A pivotal topic in agriculture and food monitoring is the assessment of the quality and ripeness of agricultural products using non-destructive testing techniques. Acoustic testing offers a rapid in situ analysis of the state of an agricultural good, providing global information about its interior. While deep learning (DL) methods have outperformed state-of-the-art benchmarks in various applications, the limited adoption of DL algorithms such as convolutional neural networks (CNNs) can be traced back to their high data inefficiency and the absence of annotated data. Active learning is a framework that has been used heavily in machine learning when labelled instances are scarce or cumbersome to obtain. It is of particular interest when the DL algorithm is highly uncertain about the label of an instance. By allowing a human-in-the-loop to provide guidance, the DL algorithm can be improved continuously in a sample-efficient manner. This paper studies the applicability of active learning for grading ‘Galia’ muskmelons based on their shelf life. We propose k-Determinantal Point Processes (k-DPP), a purely diversity-based method in which the chosen subset size k controls the exploration of the feature space. While obtaining results comparable to uncertainty-based approaches when k is large, we simultaneously achieve a better exploration of the data distribution. The eigendecomposition-based implementation runs in O(n³) time, which can be further reduced to O(n·poly(k)) via rejection sampling. We suggest using diversity-based acquisition when only a few labelled samples are available, allowing for better exploration while counteracting the drawback of greedy uncertainty-based methods, which can miss the training objective.
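
The abstract does not include code; the following is a minimal NumPy sketch of the standard eigendecomposition-based k-DPP sampler (Kulesza & Taskar, 2012) to which the O(n³) runtime refers. The function and variable names are illustrative and not the authors' implementation; the sketch assumes rank(L) ≥ k so the selection probabilities are well defined.

```python
import numpy as np

def sample_k_dpp(L, k, rng=None):
    """Sample a size-k subset from a k-DPP with likelihood kernel L.

    Standard scheme: (1) eigendecompose L, (2) pick k eigenvectors with
    probabilities given by elementary symmetric polynomials, (3) run the
    projection-DPP sampling loop on the chosen eigenvectors.
    """
    rng = np.random.default_rng(rng)
    eigvals, eigvecs = np.linalg.eigh(L)          # O(n^3) set-up cost
    n = len(eigvals)

    # Elementary symmetric polynomials: e[l, m] = e_l(lambda_1, ..., lambda_m).
    e = np.zeros((k + 1, n + 1))
    e[0, :] = 1.0
    for l in range(1, k + 1):
        for m in range(1, n + 1):
            e[l, m] = e[l, m - 1] + eigvals[m - 1] * e[l - 1, m - 1]

    # Select exactly k eigenvectors (assumes rank(L) >= k, so e[l, m] > 0).
    chosen, l = [], k
    for m in range(n, 0, -1):
        if l == 0:
            break
        if rng.random() < eigvals[m - 1] * e[l - 1, m - 1] / e[l, m]:
            chosen.append(m - 1)
            l -= 1
    V = eigvecs[:, chosen]

    # Projection-DPP loop: pick one item per remaining eigenvector.
    items = []
    while V.shape[1] > 0:
        probs = np.sum(V ** 2, axis=1)
        probs /= probs.sum()
        i = rng.choice(n, p=probs)
        items.append(i)
        # Restrict span(V) to vectors with zero i-th coordinate, re-orthonormalize.
        j = np.argmax(np.abs(V[i, :]))
        V = V - np.outer(V[:, j], V[i, :] / V[i, j])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return items
```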

2020
Vol. 34 (09)
pp. 13634-13635
Author(s):  
Kun Qian
Poornima Chozhiyath Raman
Yunyao Li
Lucian Popa

Entity name disambiguation is an important task in many text-based AI applications. Entity names usually have internal semantic structures that are useful for resolving different variations of the same entity. We present PARTNER, a deep-learning-based interactive system for entity name understanding. Powered by effective active learning and weak supervision, PARTNER can learn deep-learning-based models for identifying entity name structure with low human effort. PARTNER also allows the user to design complex normalization and variant-generation functions without coding skills.
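
For illustration only: PARTNER's exact acquisition strategy is not described in this abstract, but a typical uncertainty-based active-learning query step looks like the sketch below. A scikit-learn-style `predict_proba` interface is assumed, and the helper name is hypothetical.

```python
import numpy as np

def least_confidence_query(model, unlabeled_X, batch_size=10):
    """Pick the unlabeled examples whose top predicted probability is lowest,
    i.e. where the model is least confident; these are routed to the annotator.
    """
    proba = model.predict_proba(unlabeled_X)      # shape (n_samples, n_classes)
    confidence = proba.max(axis=1)                # top-class probability per sample
    return np.argsort(confidence)[:batch_size]    # least confident first
```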


2019
Vol. 9 (22)
pp. 4749
Author(s):  
Lingyun Jiang
Kai Qiao
Linyuan Wang
Chi Zhang
Jian Chen
...  

Decoding human brain activity, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data impose restrictions on satisfactory reconstruction, especially for deep-learning-based reconstruction methods, which require huge amounts of labelled samples. In contrast to deep learning methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduce the mechanism of comparison into the deep learning method to realize better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. In this way, we propose a Siamese reconstruction network (SRN) method. Using the SRN, we improved the reconstruction results on two fMRI recording datasets, achieving 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from about n samples to 2n sample pairs, which takes full advantage of the limited quantity of training samples. The SRN learns to draw sample pairs of the same class together and to disperse sample pairs of different classes in feature space.
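
As a rough illustration of the "learning to compare" idea, the sketch below pairs a shared encoder with a contrastive loss in PyTorch. The layer sizes, margin, and loss form are assumptions made for the example, not the SRN architecture from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared encoder applied to both members of a sample pair."""
    def __init__(self, in_dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, x1, x2):
        return self.net(x1), self.net(x2)

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Pull same-class pairs together, push different-class pairs at least
    `margin` apart (Hadsell-style contrastive loss)."""
    d = F.pairwise_distance(z1, z2)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

# Example usage with random data standing in for fMRI feature vectors:
# enc = SiameseEncoder(in_dim=128)
# x1, x2 = torch.randn(32, 128), torch.randn(32, 128)
# y = torch.randint(0, 2, (32,)).float()   # 1 = same class, 0 = different
# loss = contrastive_loss(*enc(x1, x2), y)
```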


2020
pp. 1-14
Author(s):  
Shota Osada

Abstract We prove the Bernoulli property for determinantal point processes on $\mathbb{R}^d$ with translation-invariant kernels. For the determinantal point processes on $\mathbb{Z}^d$ with translation-invariant kernels, the Bernoulli property was proved by Lyons and Steif [Stationary determinantal processes: phase multiplicity, bernoullicity, and domination. Duke Math. J. 120 (2003), 515–575] and Shirai and Takahashi [Random point fields associated with certain Fredholm determinants II: fermion shifts and their ergodic properties. Ann. Probab. 31 (2003), 1533–1564]. We prove its continuum version. For this purpose, we also prove the Bernoulli property for the tree representations of the determinantal point processes.
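
For context, the textbook definition underlying this and the following abstracts (a standard definition, not a result of the paper): a point process X on $\mathbb{R}^d$ is determinantal with kernel K when its correlation functions are given by determinants of K, and translation invariance of the kernel makes the law of X invariant under the $\mathbb{R}^d$-action studied here.

```latex
% n-point correlation functions of a determinantal point process with kernel K:
\[
  \rho_n(x_1,\dots,x_n) \;=\; \det\bigl[K(x_i,x_j)\bigr]_{1\le i,j\le n},
  \qquad n \ge 1 .
\]
% Translation invariance: K(x,y) = k(x-y) for some function k on \mathbb{R}^d.
```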


2021
Vol. 58 (2)
pp. 469-483
Author(s):  
Jesper Møller
Eliza O’Reilly

Abstract For a determinantal point process (DPP) X with a kernel K whose spectrum is strictly less than one, André Goldman established a coupling to its reduced Palm process $X^u$ at a point u with $K(u,u)>0$ such that, almost surely, $X^u$ is obtained by removing a finite number of points from X. We sharpen this result, assuming weaker conditions and establishing that $X^u$ can be obtained by removing at most one point from X, where we specify the distribution of the difference $\xi_u := X\setminus X^u$. This is used to discuss the degree of repulsiveness in DPPs in terms of $\xi_u$, including Ginibre point processes and other specific parametric models for DPPs.
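
Background (a standard fact from the DPP literature, not a claim of this abstract): the reduced Palm process $X^u$ of a DPP is itself a DPP, with kernel obtained from K by a Schur-complement-type update at u; the coupling studied here compares a realization of X with a realization driven by this Palm kernel.

```latex
% Reduced Palm kernel of a DPP at a point u with K(u,u) > 0:
\[
  K^{u}(x,y) \;=\; K(x,y) \;-\; \frac{K(x,u)\,K(u,y)}{K(u,u)} .
\]
```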


Author(s):  
Jack Poulson

Determinantal point processes (DPPs) were introduced by Macchi (Macchi 1975 Adv. Appl. Probab. 7, 83–122) as a model for repulsive (fermionic) particle distributions. But their recent popularization is largely due to their usefulness for encouraging diversity in the final stage of a recommender system (Kulesza & Taskar 2012 Found. Trends Mach. Learn. 5, 123–286). The standard sampling scheme for finite DPPs is a spectral decomposition followed by an equivalent of a randomly diagonally pivoted Cholesky factorization of an orthogonal projection, which is only applicable to Hermitian kernels and has an expensive set-up cost. Researchers have begun to connect DPP sampling to LDL^H factorizations as a means of avoiding the initial spectral decomposition (Launay et al. 2018, http://arxiv.org/abs/1802.08429 ; Chen & Zhang 2018, NeurIPS, https://papers.nips.cc/paper/7805-fast-greedy-map-inference-for-determinantal-point-process-to-improve-recommendation-diversity.pdf ), but existing approaches have only outperformed the spectral decomposition approach in special circumstances where the number of kept modes is a small percentage of the ground set size. This article proves that trivial modifications of LU and LDL^H factorizations yield efficient direct sampling schemes for non-Hermitian and Hermitian DPP kernels, respectively. Furthermore, it is experimentally shown that even dynamically scheduled, shared-memory parallelizations of high-performance dense and sparse-direct factorizations can be trivially modified to yield DPP sampling schemes with essentially identical performance. The software developed as part of this research, Catamari (hodgestar.com/catamari), is released under the Mozilla Public License v.2.0. It contains header-only, C++14 plus OpenMP 4.0 implementations of dense and sparse-direct, Hermitian and non-Hermitian DPP samplers. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
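
A minimal dense sketch of the factorization-based sampling idea the article describes: walk down the diagonal of the marginal kernel, decide each item with a Bernoulli draw, and apply the matching Schur-complement update, which amounts to a pivot-modified LDL^H elimination. The sketch assumes a real symmetric marginal kernel with eigenvalues in [0, 1] and is an O(n³) NumPy illustration, not Catamari.

```python
import numpy as np

def sample_dpp_ldl(K, rng=None):
    """Sample from a DPP with (real symmetric) marginal kernel K by a
    pivot-modified LDL-style elimination: each diagonal entry is the
    conditional inclusion probability of that item given earlier decisions."""
    rng = np.random.default_rng(rng)
    A = np.array(K, dtype=float, copy=True)
    n = A.shape[0]
    sample = []
    for j in range(n):
        p = A[j, j]                      # P(item j in sample | decisions so far)
        keep = rng.random() < p
        if keep:
            sample.append(j)
            pivot = p                    # condition on including item j
        else:
            pivot = p - 1.0              # condition on excluding item j
        if j + 1 < n and abs(pivot) > 1e-12:
            # Schur-complement update of the trailing submatrix.
            A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:]) / pivot
    return sample
```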

