A Benchmark Dataset for RGB-D Sphere Based Calibration

Author(s): D.J.T. Boas, S. Poltaretskyi, J.-Y. Ramel, J. Chaoui, J. Berhouet, ...

2019
Author(s): Mohammad Rezaei, Yanjun Li, Xiaolin Li, Chenglong Li

Introduction: The ability to discriminate among ligands binding to the same protein target in terms of their relative binding affinity lies at the heart of structure-based drug design. Any improvement in the accuracy and reliability of binding affinity prediction methods decreases the discrepancy between experimental and computational results.

Objectives: The primary objectives were to identify the most relevant features affecting binding affinity prediction, to minimize manual feature engineering, and to improve the reliability of binding affinity prediction using efficient deep learning models with tuned hyperparameters.

Methods: The binding site of each target protein was represented as a grid box around its bound ligand. Both binary and distance-dependent occupancies were examined for how an atom affects its neighboring voxels in this grid. A combination of features, including ANOLEA, ligand elements, and Arpeggio atom types, was used to represent the input. An efficient convolutional neural network (CNN) architecture, DeepAtom, was developed, trained, and tested on the PDBbind v2016 dataset. Additionally, an extended benchmark dataset was compiled to train and evaluate the models.

Results: The best DeepAtom model showed improved accuracy in binding affinity prediction on the PDBbind core subset (Pearson's R = 0.83), outperforming recent state-of-the-art models in this field. In addition, when trained on our proposed benchmark dataset, DeepAtom yields a higher correlation than the baseline, which confirms the value of our model.

Conclusions: The promising results for the predicted binding affinities are expected to pave the way for embedding deep learning models in virtual screening and rational drug design.
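The grid-box representation described in the Methods can be sketched as follows. This is a minimal illustration, not DeepAtom's actual featurization: the Gaussian falloff used for the distance-dependent mode is an assumption (the paper may use a different occupancy function), and all names and parameters are illustrative.

```python
import numpy as np

def voxelize(atom_coords, grid_size=24, resolution=1.0, mode="binary", sigma=1.0):
    """Map 3D atom coordinates into an occupancy grid centered on their centroid.

    mode="binary":   a voxel is 1 if it contains an atom center.
    mode="distance": each atom contributes exp(-d^2 / (2*sigma^2)) to every
                     voxel center at distance d (assumed Gaussian falloff).
    """
    grid = np.zeros((grid_size,) * 3)
    origin = atom_coords.mean(axis=0) - resolution * grid_size / 2.0
    if mode == "binary":
        # Snap each atom to the voxel that contains it, ignoring out-of-box atoms.
        idx = np.floor((atom_coords - origin) / resolution).astype(int)
        valid = np.all((idx >= 0) & (idx < grid_size), axis=1)
        for i, j, k in idx[valid]:
            grid[i, j, k] = 1.0
    else:
        # Accumulate a smooth contribution from each atom at every voxel center.
        centers = (np.indices((grid_size,) * 3).reshape(3, -1).T + 0.5) * resolution + origin
        for a in atom_coords:
            d2 = ((centers - a) ** 2).sum(axis=1)
            grid += np.exp(-d2 / (2 * sigma ** 2)).reshape(grid.shape)
    return grid
```

In a real pipeline one such grid would be built per feature channel (e.g. per element or Arpeggio atom type) and stacked into the CNN input tensor.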


2021, Vol 30, pp. 2003-2015
Author(s): Xinda Liu, Weiqing Min, Shuhuan Mei, Lili Wang, Shuqiang Jiang

2021, Vol 179, pp. 108-120
Author(s): Weixiao Gao, Liangliang Nan, Bas Boom, Hugo Ledoux

2021, Vol 13 (13), pp. 2559
Author(s): Daniele Cerra, Miguel Pato, Kevin Alonso, Claas Köhler, Mathias Schneider, ...

Spectral unmixing represents both an application per se and a pre-processing step for several applications involving data acquired by imaging spectrometers. However, there is still a lack of publicly available reference data sets suitable for the validation and comparison of different spectral unmixing methods. In this paper, we introduce the DLR HyperSpectral Unmixing (DLR HySU) benchmark dataset, acquired over German Aerospace Center (DLR) premises in Oberpfaffenhofen. The dataset includes airborne hyperspectral and RGB imagery of targets of different materials and sizes, complemented by simultaneous ground-based reflectance measurements. The DLR HySU benchmark allows a separate assessment of all main spectral unmixing steps: dimensionality estimation, endmember extraction (with and without the pure pixel assumption), and abundance estimation. Results obtained with traditional algorithms for each of these steps are reported. To the best of our knowledge, this is the first time that real imaging spectrometer data with accurately measured targets have been made available for hyperspectral unmixing experiments. The DLR HySU benchmark dataset is openly available online and the community is welcome to use it for spectral unmixing and other applications.
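The abundance estimation step mentioned above is commonly posed under the linear mixing model, where each pixel spectrum is a non-negative combination of endmember spectra. A minimal sketch using non-negative least squares (the function name and data layout are illustrative; the paper's evaluated algorithms may differ):

```python
import numpy as np
from scipy.optimize import nnls

def abundances_nnls(pixels, endmembers):
    """Estimate per-pixel abundances under the linear mixing model
    pixel ≈ E @ a with a >= 0, solved by non-negative least squares.

    pixels:     (n_pixels, n_bands) array of spectra.
    endmembers: (n_bands, n_endmembers) matrix E, one column per material.
    Note: the sum-to-one constraint is not enforced here; fully constrained
    variants add it on top of non-negativity.
    """
    return np.array([nnls(endmembers, p)[0] for p in pixels])
```

Given extracted endmembers, the resulting abundance maps can be compared against the ground-based reference measurements the benchmark provides.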


Author(s): Emiliano Spera, Antonino Furnari, Sebastiano Battiato, Giovanni Maria Farinella

2021, pp. jfds.2021.1.074
Author(s): Charles Huang, Weifeng Ge, Hongsong Chou, Xin Du

2020, Vol 10 (3), pp. 762
Author(s): Erinc Merdivan, Deepika Singh, Sten Hanke, Johannes Kropf, Andreas Holzinger, ...

Conversational agents are gaining huge popularity in industrial applications such as digital assistants, chatbots, and particularly systems for natural language understanding (NLU). However, a major drawback is the unavailability of a common metric to evaluate the replies of conversational agents against human judgement. In this paper, we develop a benchmark dataset with human annotations and diverse replies that can be used to develop such a metric for conversational agents. The paper introduces a high-quality human-annotated movie dialogue dataset, HUMOD, developed from the Cornell movie dialogues dataset. This new dataset comprises 28,500 human responses from 9500 multi-turn dialogue history-reply pairs. Human responses include: (i) ratings of the dialogue reply in relevance to the dialogue history; and (ii) unique dialogue replies for each dialogue history from the users. Such unique dialogue replies enable researchers to evaluate their models against six unique human responses for each given history. A detailed analysis of how the dialogues are structured and of human perception of dialogue scores, in comparison with existing models, is also presented.
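A dataset of this kind is typically used to validate an automatic reply-scoring metric by correlating its scores with the human relevance ratings. A minimal sketch of that evaluation step (variable names and data layout are illustrative, not the released HUMOD schema):

```python
from scipy.stats import pearsonr, spearmanr

def metric_human_agreement(metric_scores, human_scores):
    """Correlate an automatic metric's reply scores with human relevance
    ratings over the same history-reply pairs.

    Returns (Pearson r, Spearman rho): linear and rank agreement with
    human judgement. Higher is better for a candidate metric.
    """
    r, _ = pearsonr(metric_scores, human_scores)
    rho, _ = spearmanr(metric_scores, human_scores)
    return r, rho
```

With multiple human ratings per pair (as in HUMOD), the human scores would usually be averaged per history-reply pair before computing the correlations.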

