ACP-MHCNN: an accurate multi-headed deep-convolutional neural network to predict anticancer peptides

2021
Vol 11 (1)
Author(s):
Sajid Ahmed
Rafsanjani Muhammod
Zahid Hossain Khan
Sheikh Adilina
Alok Sharma
...

Abstract Although the development of therapeutic alternatives for treating deadly cancers has gained much attention globally, primary methods such as chemotherapy still have significant downsides, including low specificity. Recently, anticancer peptides (ACPs) have emerged as a potential alternative therapy with far fewer negative side effects. However, identifying ACPs through wet-lab experiments is expensive and time-consuming, so computational methods have emerged as viable alternatives. Over the past few years, several computational ACP identification techniques using hand-engineered features have been proposed. In this study, we propose a new multi-headed deep convolutional neural network model, ACP-MHCNN, for extracting and combining discriminative features from different information sources in an interactive way. Our model extracts sequence-based, physicochemical, and evolutionary features for ACP identification from different numerical peptide representations while restraining parameter overhead. Rigorous cross-validation and independent-dataset experiments show that ACP-MHCNN outperforms other anticancer peptide identification models by a substantial margin on our employed benchmarks: it exceeds the state-of-the-art model by 6.3%, 8.6%, 3.7%, and 4.0% in accuracy, sensitivity, specificity, and precision, respectively, and by 0.20 in MCC. ACP-MHCNN, with its relevant codes and datasets, is publicly available at https://github.com/mrzResearchArena/Anticancer-Peptides-CNN and as an online predictor at https://anticancer.pythonanywhere.com/.
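
The multi-headed design is the paper's central idea: each numerical peptide representation feeds its own convolutional head, and the heads are fused before classification. Below is a minimal PyTorch sketch of that pattern; the layer sizes, channel counts, and encodings are illustrative assumptions rather than the authors' exact architecture, which is available in the linked repository.

```python
# A minimal sketch of a multi-headed 1-D CNN in the spirit of ACP-MHCNN.
# Layer sizes, channel counts, and encodings are illustrative assumptions;
# the authors' exact architecture is in the linked GitHub repository.
import torch
import torch.nn as nn

class MultiHeadPeptideCNN(nn.Module):
    def __init__(self, n_aa=20, n_phys=7, n_evo=20):
        super().__init__()
        def head(in_ch):
            # one small convolutional head per peptide representation
            return nn.Sequential(
                nn.Conv1d(in_ch, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),  # global max pooling keeps parameters low
            )
        self.seq_head = head(n_aa)     # one-hot sequence channels
        self.phys_head = head(n_phys)  # physicochemical property channels
        self.evo_head = head(n_evo)    # evolutionary profile (e.g. PSSM) channels
        self.classifier = nn.Sequential(
            nn.Linear(32 * 3, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x_seq, x_phys, x_evo):
        # each input is (batch, channels, length); heads are fused by concatenation
        z = torch.cat([self.seq_head(x_seq).squeeze(-1),
                       self.phys_head(x_phys).squeeze(-1),
                       self.evo_head(x_evo).squeeze(-1)], dim=1)
        return self.classifier(z)  # logit; train with nn.BCEWithLogitsLoss
```

Fusing a small classifier over the concatenated head outputs lets each representation contribute its own learned features while keeping the total parameter count modest.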


2021
Vol 40 (1)
Author(s):
Tuomas Koskinen
Iikka Virkkunen
Oskar Siljama
Oskari Jessen-Juhler

Abstract Previous research (Li et al., Understanding the disharmony between dropout and batch normalization by variance shift. CoRR abs/1801.05134 (2018). http://arxiv.org/abs/1801.05134) has shown the plausibility of using a modern deep convolutional neural network to detect flaws in phased-array ultrasonic data. This brings the repeatability and effectiveness of automated systems to complex ultrasonic signal evaluation, previously done exclusively by human inspectors. The major breakthrough was to use virtual flaws to generate ample flaw data for training the algorithm. This enabled detection from raw ultrasonic scan data and made it possible to leverage approaches used in machine-learning image recognition. Unlike in traditional image recognition, training data for ultrasonic inspection is scarce. While virtual flaws allow us to broaden the data considerably, original flaws with a proper flaw-size distribution are still required. The same is true, of course, for training human inspectors. Human inspectors are usually trained with easily manufacturable flaws such as side-drilled holes and EDM notches. While the difference between these easily manufactured artificial flaws and real flaws is obvious, human inspectors still manage to train on them and perform well in real inspection scenarios. In the present work, we use a modern deep convolutional neural network to detect flaws in phased-array ultrasonic data and compare the results achieved with training data obtained from various artificial flaws. The model demonstrated good generalization toward flaw sizes larger than those in the original training data, and the minimum flaw size in the data set affects the $a_{90/95}$ value. This work also demonstrates how different artificial flaws (solidification cracks, EDM notches, and simple simulated flaws) generalize differently.
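
The $a_{90/95}$ figure of merit is the flaw size detected with 90 % probability at 95 % confidence, read from a fitted probability-of-detection (POD) curve. The sketch below illustrates one way to compute it on synthetic hit/miss data, using a logistic POD model and a bootstrap upper bound; the paper's exact POD methodology is an assumption here.

```python
# A hedged sketch of estimating a90 and a90/95 from hit/miss inspection data
# with a logistic POD curve; the paper's exact POD methodology may differ,
# and the flaw-size data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
size = rng.uniform(0.5, 5.0, 200)                     # flaw sizes, mm (synthetic)
hit = (rng.random(200) < 1 / (1 + np.exp(-3 * (size - 2.0)))).astype(int)

def a90(sizes, hits):
    """Flaw size at which the fitted logistic POD curve reaches 90 %."""
    m = LogisticRegression().fit(np.log(sizes)[:, None], hits)
    b0, b1 = m.intercept_[0], m.coef_[0, 0]
    return np.exp((np.log(0.9 / 0.1) - b0) / b1)      # invert logit(POD) = 0.9

# a90/95 as a 95 % upper confidence bound on a90, here by simple bootstrap
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(size), len(size))
    try:
        boot.append(a90(size[idx], hit[idx]))
    except ValueError:
        pass  # skip degenerate resamples containing only hits or only misses
print(f"a90 = {a90(size, hit):.2f} mm, a90/95 = {np.percentile(boot, 95):.2f} mm")
```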


2020
pp. bjophthalmol-2020-316526
Author(s):
Yo-Ping Huang
Haobijam Basanta
Eugene Yu-Chuan Kang
Kuan-Jen Chen
Yih-Shiou Hwang
...  

Background/Aim To automatically detect and classify the early stages of retinopathy of prematurity (ROP) using a deep convolutional neural network (CNN). Methods This retrospective cross-sectional study was conducted in a referral medical centre in Taiwan. Only premature infants with no ROP, stage 1 ROP, or stage 2 ROP were enrolled. Overall, 11 372 retinal fundus images were compiled and split into 10 235 images (90%) for training, 1137 (10%) for validation, and 244 for testing. A deep CNN was implemented to classify images according to the ROP stage. Data were collected from December 17, 2013 to May 24, 2019 and analysed from December 2018 to January 2020. Sensitivity, specificity, and area under the receiver operating characteristic curve were adopted to evaluate the performance of the algorithm relative to the reference-standard diagnosis. Results The model was trained using fivefold cross-validation, yielding an average accuracy of 99.93%±0.03 during training and 92.23%±1.39 during testing. The sensitivity and specificity scores of the model were 96.14%±0.87 and 95.95%±0.48 when predicting no ROP versus ROP, 91.82%±2.03 and 94.50%±0.71 when predicting stage 1 ROP versus no ROP and stage 2 ROP, and 89.81%±1.82 and 98.99%±0.40 when predicting stage 2 ROP versus no ROP and stage 1 ROP. Conclusions The proposed system can accurately differentiate among the early stages of ROP and has the potential to help ophthalmologists classify ROP at an early stage.
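
Each reported sensitivity/specificity pair above is a one-vs-rest reading of a three-class confusion matrix. The sketch below shows that computation on made-up labels; it is illustrative only and not the authors' evaluation code.

```python
# A small sketch of the one-vs-rest sensitivity/specificity computation used
# above (no ROP vs ROP, stage 1 vs rest, stage 2 vs rest); labels are made up.
import numpy as np
from sklearn.metrics import confusion_matrix

def ovr_sens_spec(y_true, y_pred, positive):
    """Sensitivity and specificity with `positive` as the positive class."""
    t = (np.asarray(y_true) == positive).astype(int)
    p = (np.asarray(y_pred) == positive).astype(int)
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

y_true = [0, 0, 1, 2, 1, 0, 2, 2]   # 0 = no ROP, 1 = stage 1, 2 = stage 2
y_pred = [0, 0, 1, 2, 0, 0, 2, 1]
for stage in (0, 1, 2):
    sens, spec = ovr_sens_spec(y_true, y_pred, stage)
    print(f"class {stage} vs rest: sensitivity {sens:.2f}, specificity {spec:.2f}")
```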


2021
Vol 09 (11)
pp. E1778-E1784
Author(s):
Daniel J. Low
Zhuoqiao Hong
Rishad Khan
Rishi Bansal
Nikko Gimpaya
...  

Abstract Background and study aims Colonoscopy completion reduces post-colonoscopy colorectal cancer. As a result, there have been attempts at implementing artificial intelligence to automate detection of the appendiceal orifice (AO) for quality assurance. However, the utility of these algorithms has not been demonstrated in suboptimal conditions, including variable bowel preparation. We present an automated computer-assisted method using a deep convolutional neural network to detect the AO irrespective of bowel preparation. Methods A total of 13,222 images (6,663 AO and 1,322 non-AO) were extracted from 35 colonoscopy videos recorded between 2015 and 2018. The images were labelled with Boston Bowel Preparation Scale (BBPS) scores. A total of 11,900 images were used for training/validation and 1,322 for testing. We developed a convolutional neural network (CNN) with a DenseNet architecture pre-trained on ImageNet as a feature extractor on our data and trained a classifier tailored to identifying AO and non-AO images using binary cross-entropy loss. Results The deep convolutional neural network correctly classified the AO and non-AO images with an accuracy of 94 %. The area under the receiver operating characteristic curve of this neural network was 0.98. The sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm were 0.96, 0.92, 0.92, and 0.96, respectively. AO detection exceeded 95 % regardless of BBPS score, while non-AO detection improved from BBPS 1 (83.95 %) to BBPS 3 (98.28 %). Conclusions A deep convolutional neural network was created that demonstrates excellent discrimination of AO from non-AO images despite variable bowel preparation. This algorithm will require further testing to ascertain its effectiveness in real-time colonoscopy.
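
The training setup described (a pre-trained DenseNet backbone plus a binary classifier under binary cross-entropy loss) maps naturally onto a few lines of PyTorch. The sketch below is a hedged reconstruction: the frozen backbone, optimizer, and learning rate are assumptions, not the authors' exact pipeline.

```python
# A hedged PyTorch sketch of the described setup: ImageNet-pre-trained DenseNet
# as a feature extractor with a binary AO/non-AO head trained under binary
# cross-entropy. Freezing the backbone and the learning rate are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False            # use DenseNet purely as a feature extractor
model.classifier = nn.Linear(model.classifier.in_features, 1)  # AO-vs-non-AO logit

criterion = nn.BCEWithLogitsLoss()     # binary cross-entropy on the raw logit
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step; images: (B, 3, 224, 224), labels: (B,) in {0, 1}."""
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```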


Endoscopy
2020
Author(s):
Atsuo Yamada
Ryota Niikura
Keita Otani
Tomonori Aoki
Kazuhiko Koike

Abstract Background Although colorectal neoplasms are the most common abnormalities found in colon capsule endoscopy (CCE), no computer-aided detection method is yet available. We developed an artificial intelligence (AI) system that uses deep learning to automatically detect such lesions in CCE images. Methods We trained a deep convolutional neural network system based on a Single Shot MultiBox Detector using 15 933 CCE images of colorectal neoplasms, such as polyps and cancers. We assessed performance by calculating areas under the receiver operating characteristic curves, along with sensitivities, specificities, and accuracies, using an independent test set of 4784 images, including 1850 images of colorectal neoplasms and 2934 normal colon images. Results The area under the curve for detection of colorectal neoplasia by the AI model was 0.902. The sensitivity, specificity, and accuracy were 79.0 %, 87.0 %, and 83.9 %, respectively, at a probability cutoff of 0.348. Conclusions We developed and validated a new AI-based system that automatically detects colorectal neoplasms in CCE images.
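
Reporting sensitivity, specificity, and accuracy "at a probability cutoff of 0.348" implies the cutoff was chosen from the ROC curve; a common rule is Youden's J, which is assumed in the sketch below since the paper's selection rule is not stated here.

```python
# A sketch of how a probability cutoff such as 0.348 can be derived from the
# ROC curve; Youden's J is assumed here, as the selection rule is not stated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])           # 1 = neoplasm present
y_score = np.array([.9, .2, .4, .7, .3, .1, .6, .5])  # model probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
cutoff = thresholds[np.argmax(tpr - fpr)]             # maximize Youden's J
y_pred = (y_score >= cutoff).astype(int)

sens = (y_pred[y_true == 1] == 1).mean()
spec = (y_pred[y_true == 0] == 0).mean()
acc = (y_pred == y_true).mean()
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}, cutoff = {cutoff:.3f}, "
      f"sensitivity = {sens:.2f}, specificity = {spec:.2f}, accuracy = {acc:.2f}")
```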


2020
Vol 2020 (4)
pp. 4-14
Author(s):
Vladimir Budak
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. A new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was transfer-trained on this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of objects. This work allows us to classify arbitrary spotlights with an accuracy of about 80 %. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.
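
Transfer training from a pre-trained GoogLeNet amounts to freezing the ImageNet features and replacing the final fully connected layer with a head sized to the spotlight scale. A minimal PyTorch sketch, with the number of classes assumed:

```python
# A minimal sketch of transfer training from ImageNet-pre-trained GoogLeNet;
# the number of beam-angle classes (10) is an illustrative assumption.
import torch.nn as nn
from torchvision import models

n_classes = 10  # assumed size of the beam-angle scale
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in net.parameters():
    p.requires_grad = False                        # keep ImageNet features fixed
net.fc = nn.Linear(net.fc.in_features, n_classes)  # new trainable head
# Train net.fc on the labelled spotlight collection as usual; Grad-CAM
# (e.g. the pytorch-grad-cam package) can then show which image regions
# drive each class prediction.
```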

