Multi-Branch-CNN: classification of ion channel interacting peptides using parallel convolutional neural networks

2021 ◽  
Author(s):  
Jielu Yan ◽  
Bob Zhang ◽  
Mingliang Zhou ◽  
Hang Fai Kwok ◽  
Shirley W.I. Siu

Ligand peptides that have high affinity for ion channels are critical for regulating ion flux across the plasma membrane. These peptides are now being considered as potential drug candidates for many diseases, such as cardiovascular disease and cancers. Several studies have sought to identify ion channel interacting peptides computationally but, to the best of our knowledge, none has published an available prediction tool. To provide a solution, we present Multi-Branch-CNN, a parallel convolutional neural network (CNN) method for identifying three types of ion channel peptide binders (sodium, potassium, and calcium). Our experiments show that the Multi-Branch-CNN method performs comparably to thirteen traditional ML algorithms (TML13) on the test sets of the three ion channels. To evaluate the predictive power of our method with respect to novel sequences, as is the case in real-world applications, we created an additional test set for each ion channel, called the novel-test set, which has little or no similarity to the sequences in either the training set or the test set. In the novel-test experiment, Multi-Branch-CNN performs significantly better than TML13, showing an improvement in accuracy of 6%, 14%, and 15% for the sodium, potassium, and calcium channels, respectively. We confirmed the effectiveness of Multi-Branch-CNN by comparing it to the standard CNN method with one input branch (Single-Branch-CNN) and an ensemble method (TML13-Stack). To facilitate applications, the data sets, script files to reproduce the experiments, and the final predictive models are freely available at https://github.com/jieluyan/Multi-Branch-CNN.
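The parallel-branch idea described above can be sketched as follows: each peptide feature encoding is processed by its own 1-D convolutional branch, and the pooled branch outputs are concatenated before a shared dense classifier. This is a minimal sketch only; the branch count, encodings (here a hypothetical one-hot branch and a physicochemical-feature branch), filter counts, and kernel sizes are illustrative assumptions, not the authors' published hyperparameters.

```python
import torch
import torch.nn as nn

class MultiBranchCNN(nn.Module):
    """Sketch of a multi-input-branch CNN: one 1-D conv branch per
    peptide encoding, concatenated before a shared dense head.
    All layer sizes are illustrative, not the published model."""

    def __init__(self, branch_in_channels, n_filters=32):
        super().__init__()
        # One convolutional branch per input encoding.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(c, n_filters, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),  # global max pool over sequence
            )
            for c in branch_in_channels
        ])
        self.classifier = nn.Sequential(
            nn.Linear(n_filters * len(branch_in_channels), 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # binder vs. non-binder logit
        )

    def forward(self, inputs):
        # inputs: list of tensors, one per encoding, each (B, C_i, L)
        feats = [branch(x).squeeze(-1) for branch, x in zip(self.branches, inputs)]
        return self.classifier(torch.cat(feats, dim=1))
```

For example, a batch of 50-residue peptides encoded as a 20-channel one-hot tensor plus a 6-channel physicochemical tensor would be passed as `model([onehot, physchem])`, yielding one logit per peptide.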

Author(s):  
Titus Josef Brinker ◽  
Achim Hekler ◽  
Jochen Sven Utikal ◽  
Dirk Schadendorf ◽  
Carola Berking ◽  
...  

BACKGROUND State-of-the-art classifiers based on convolutional neural networks (CNNs) generally outperform dermatologists at diagnosis and could enable fast, life-saving diagnoses, even outside the hospital via installation on mobile devices. To our knowledge, at present, there is no review of the current work in this research area. OBJECTIVE This study presents the first systematic review of the state-of-the-art research on classifying skin lesions with CNNs. We limit our review to skin lesion classifiers. In particular, methods that apply a CNN only for segmentation or for the classification of dermoscopic patterns are not considered here. Furthermore, this study discusses why comparing the presented procedures is very difficult and which challenges must be addressed in the future. METHODS We searched the Google Scholar, PubMed, Medline, Science Direct, and Web of Science databases for systematic reviews and original research articles published in English. Only papers that reported their methods in sufficient detail are included in this review. RESULTS We found 13 papers that classified skin lesions using CNNs. In principle, the classification methods can be differentiated according to three principles. Approaches that take a CNN already trained on another large data set and then fine-tune its parameters for skin lesion classification are both the most common and the best performing with the currently available limited data sets. CONCLUSIONS CNNs display high performance as state-of-the-art skin lesion classifiers. Unfortunately, it is difficult to compare different classification methods because some approaches use non-public data sets for training and/or testing, thereby making reproducibility difficult.


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most of the research in image classification only focuses on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image and an additional degradation parameter for classification. The proposed classification network has two inputs which are the degraded image and the degradation parameter. The estimation network of degradation parameters is also incorporated if degradation parameters of degraded images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach where the classification network is trained with degraded images only.
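The two-input design described above can be sketched as a classifier that consumes both the degraded image and its degradation parameter, with the parameter appended to the pooled image features before the fully connected head. This is an illustrative sketch under the assumption of a single scalar degradation parameter (e.g. a noise level); the backbone and layer sizes are not the paper's architecture.

```python
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    """Sketch of a two-input classifier: a degraded image plus a
    scalar degradation parameter (e.g. noise level or compression
    quality). Layer sizes are illustrative assumptions."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # (B, 32, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 1, 64), nn.ReLU(),  # +1 for the parameter
            nn.Linear(64, n_classes),
        )

    def forward(self, image, degradation_param):
        f = self.features(image).flatten(1)          # (B, 32)
        x = torch.cat([f, degradation_param], dim=1)  # append parameter
        return self.head(x)
```

When the degradation parameter is unknown, the paper incorporates a separate estimation network; in this sketch its output would simply replace the `degradation_param` argument.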


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adam Goodwin ◽  
Sanket Padmanabhan ◽  
Sanchit Hira ◽  
Margaret Glancey ◽  
Monet Slinowsky ◽  
...  

With over 3500 mosquito species described, accurate species identification of the few implicated in disease transmission is critical to mosquito-borne disease mitigation. Yet this task is hindered by limited global taxonomic expertise and by specimen damage consistent across common capture methods. Convolutional neural networks (CNNs) are promising with limited sets of species, but image database requirements restrict practical implementation. Using an image database of 2696 specimens from 67 mosquito species, we address the practical open-set problem with a detection algorithm for novel species. Closed-set classification of 16 known species achieved 97.04 ± 0.87% accuracy independently, and 89.07 ± 5.58% when cascaded with novelty detection. Closed-set classification of 39 species produces a macro F1-score of 86.07 ± 1.81%. This demonstrates an accurate, scalable, and practical computer vision solution for identifying wild-caught mosquitoes, suitable for biosurveillance and targeted vector control programs without extensive image database development for each new target region.
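The cascade of novelty detection followed by closed-set classification can be illustrated with a common open-set baseline: reject a specimen as a novel species when the classifier's top softmax probability falls below a confidence threshold, otherwise keep the closed-set label. This max-softmax rule is a stand-in for illustration only; the paper's actual novelty detection algorithm may differ.

```python
import numpy as np

def cascade_predict(logits, threshold=0.9):
    """Open-set cascade sketch: return the closed-set class index when
    the top softmax probability is confident enough, otherwise -1 to
    flag a potentially novel species. Max-softmax thresholding is a
    common baseline, not necessarily the paper's detector."""
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    top = probs.max(axis=1)          # confidence of the top class
    labels = probs.argmax(axis=1)    # closed-set prediction
    return np.where(top >= threshold, labels, -1)  # -1 = novel species
```

A confident prediction passes through unchanged, while a near-uniform output (low top probability) is routed to the "novel species" branch instead of being forced into a known class.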

