Noncoding RNAs and Deep Learning Neural Network Discriminate Multi-Cancer Types

Cancers ◽  
2022 ◽  
Vol 14 (2) ◽  
pp. 352
Author(s):  
Anyou Wang ◽  
Rong Hai ◽  
Paul J. Rider ◽  
Qianchuan He

Detecting cancers at early stages can dramatically reduce mortality rates, so practical cancer screening at the population level is needed. To develop a comprehensive detection system that classifies multiple cancer types, we integrated an artificial intelligence deep learning neural network with noncoding RNA biomarkers selected from massive data. Our system accurately separates cancer from healthy subjects with an AUC (Area Under the Receiver Operating Characteristic curve) of 96.3%, and it reaches 78.77% AUC when validated on real-world raw data from a completely independent data set. Even when validated with raw exosome data from blood, the system reaches 72% AUC. Moreover, it significantly outperforms conventional machine learning models such as random forest. Intriguingly, with no more than six biomarkers, our approach can discriminate any individual cancer type from normal with 99% to 100% AUC. Furthermore, a comprehensive marker panel can simultaneously multi-classify common cancers with a stable 82.15% accuracy rate across heterogeneous cancerous tissues and conditions. This detection system provides a promising practical framework for automatic cancer screening at the population level. Key points: (1) We developed a practical cancer screening system that is simple, accurate, affordable, and easy to operate. (2) Our system binarily classifies cancers vs. normal with >96% AUC. (3) In total, 26 individual cancer types can be detected by our system with 99% to 100% AUC. (4) The system can detect multiple cancer types simultaneously with >82% accuracy.
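A minimal sketch of the kind of binary classifier the abstract describes, not the authors' actual model: a small feed-forward network trained on tabular noncoding-RNA expression features and scored with ROC AUC. The data here are synthetic, and the six-feature width merely echoes the "no more than six biomarkers" figure above.

```python
# Hedged sketch: feed-forward network classifying cancer vs. healthy
# from (synthetic) noncoding-RNA expression features, scored by ROC AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_markers = 1000, 6                    # six biomarkers, per the abstract
X = rng.normal(size=(n_samples, n_markers))
# Synthetic labels: a noisy linear function of two markers stands in for real data.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```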

2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 10561-10561
Author(s):  
Linhao Xu ◽  
Jun Wang ◽  
Weifeng Ma ◽  
Xin Liu ◽  
Sihui Li ◽  
...  

10561 Background: Early detection at the localized stage is pivotal for the successful treatment of various cancer types. Although several cancers already have routine screening approaches, their comprehensive use is impeded for various reasons, e.g., low accuracy, high cost, and limited availability of required facilities, especially in developing countries. Therefore, an accurate, cost-effective, and non-invasive test for screening multiple major cancers is in high demand. We previously reported a cfDNA methylation test that can detect five major cancer types with high specificity and sensitivity, especially at the early stage (stage I). These five major cancers, lung cancer (LC), breast cancer (BC), colorectal cancer (CRC), gastric cancer (GC), and esophageal cancer (EC), account for 56% of new cancer cases and 60% of cancer-related deaths yearly in China. Here, we report the result in an independent cohort as a further validation of this multi-cancer screening test. Methods: The high-throughput targeted methylation profiling platform, Aurora, was used to analyze plasma samples from an independent retrospective cohort containing 505 healthy controls and ~200 cases for each cancer type. A locked model based on our previous pilot study (reported at AACR 2020 and 2021) was applied to this data set to assess the overall performance. Results: The areas under the curve (AUC) of the classifier for LC, BC, CRC, GC, and EC are 97.3%, 96.2%, 92.0%, 94.0%, and 93.5%, respectively. At a fixed specificity of 99%, the sensitivities for LC, BC, CRC, GC, and EC are 84%, 75%, 82%, 85%, and 78%, respectively. Conclusions: A methylation blood test for screening five major cancers has been validated in a large retrospective cohort. Its high sensitivity for each cancer type, especially at the early stage (stage I), and its ease of use suggest that it can be implemented in real-world clinical practice. A large prospective clinical trial is underway to further validate this test in asymptomatic populations.
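The "sensitivity at a fixed 99% specificity" metric reported above can be read directly off a ROC curve. A brief illustration with synthetic classifier scores (the cohort sizes mirror the abstract; everything else is a stand-in):

```python
# Hedged sketch: reading sensitivity at a fixed 99% specificity from a ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y_true = np.concatenate([np.zeros(505), np.ones(200)])   # 505 controls, ~200 cases
scores = np.concatenate([rng.normal(0, 1, 505),          # synthetic control scores
                         rng.normal(2, 1, 200)])         # synthetic case scores

fpr, tpr, _ = roc_curve(y_true, scores)
mask = fpr <= 0.01            # specificity >= 99%  <=>  false-positive rate <= 1%
print("Sensitivity at 99% specificity:", tpr[mask].max())
```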


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
P. Shane Crawford ◽  
Mohammad A. Al-Zarrad ◽  
Andrew J. Graettinger ◽  
Alexander M. Hainen ◽  
Edward Back ◽  
...  

Infrastructure vulnerability has drawn significant attention in recent years, partly because of the occurrence of low-probability, high-consequence disruptive events such as the 2017 hurricanes Harvey, Irma, and Maria; the 2011 Tuscaloosa and Joplin tornadoes; and the 2015 Gorkha, Nepal, and 2017 Central Mexico earthquakes. Civil infrastructure systems support social welfare, so their viability and sustained operation are critical. A variety of frameworks, models, and tools exist for advancing infrastructure vulnerability research. Nevertheless, providing accurate vulnerability measurement remains challenging. This paper presents a state-of-the-art data collection and information extraction methodology to document infrastructure at high granularity and to assess preevent vulnerability and postevent damage in the face of disasters. The methods establish a baseline of preevent infrastructure functionality that can be used to measure impacts and temporal recovery following a disaster. The Extreme Events Web Viewer (EEWV), presented as part of the methodology, is a GIS-based web repository that stores spatial and temporal data describing communities before and after disasters and facilitates data analysis techniques. This web platform can store multiple geolocated data formats, including photographs and 360° videos. A tool for automated extraction of photography from 360° video data at locations of interest specified in the EEWV was created to streamline data utility, as sketched below. The extracted imagery provides a manageable data set to efficiently document characteristics of the built and natural environment. The methodology was tested to locate buildings vulnerable to flood and storm surge on Dauphin Island, Alabama. Approximately 1,950 buildings were passively documented with vehicle-mounted 360° video. Extracted building images were used to train a deep learning neural network to predict whether a building was elevated or nonelevated. The model was validated, and methods for iterative neural network training are described. The methodology, spanning rapid collection of large passive datasets, storage in an open repository, extraction of manageable subsets, and information extraction through deep learning, will facilitate vulnerability and postdisaster analyses as well as longitudinal recovery measurement.
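A minimal sketch of frame extraction from vehicle-mounted video, in the spirit of the EEWV tool described above, not its actual implementation. The file name and timestamps are hypothetical; in practice the timestamps would come from matching GPS traces to locations of interest.

```python
# Hedged sketch: pull still images from a video at given timestamps with OpenCV.
import cv2

video_path = "dauphin_island_360.mp4"      # hypothetical video file
timestamps_s = [12.0, 47.5, 103.2]         # hypothetical seconds where buildings appear

cap = cv2.VideoCapture(video_path)
for i, t in enumerate(timestamps_s):
    cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000)   # seek to the timestamp (milliseconds)
    ok, frame = cap.read()                     # grab the frame at that position
    if ok:
        cv2.imwrite(f"building_{i:03d}.jpg", frame)
cap.release()
```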


Author(s):  
Yaser AbdulAali Jasim

Nowadays, technology and computer science are rapidly producing new tools and algorithms, especially in the field of artificial intelligence. Machine learning drives the development of new methodologies and models that have opened a novel area of applications for artificial intelligence. In contrast to conventional neural network architectures, deep learning refers to the use of artificial neural network architectures that include multiple processing layers. In this paper, convolutional neural network (CNN) models were designed to detect (diagnose) plant disorders from samples of healthy and unhealthy plant images analyzed by deep learning methods. The models were trained on an open data set containing 18,000 images of ten different plants, including healthy specimens. Several model architectures were trained, achieving a best performance of 97 percent in detecting the respective [plant, disease] pairs. This high performance rate makes the approach a useful early-warning technique that can be further improved to support an automated plant disease detection system under actual farm conditions.
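For illustration, a small CNN of the general kind described, not the paper's exact architecture: the image size, layer widths, and class count are assumptions, with the output layer covering hypothetical [plant, disease] pairs.

```python
# Hedged sketch: a compact CNN for multi-class plant-disease image classification.
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 20                       # hypothetical number of [plant, disease] pairs
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),            # assumed input image size
    layers.Conv2D(32, 3, activation="relu"),      # first convolutional block
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),      # second convolutional block
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```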


2018 ◽  
Vol 7 (4.11) ◽  
pp. 198 ◽  
Author(s):  
Mohamad Hazim Johari ◽  
Hasliza Abu Hassan ◽  
Ahmad Ihsan Mohd Yassin ◽  
Nooritawati Md Tahir ◽  
Azlee Zabidi ◽  
...  

This project presents a method to detect diabetic retinopathy in fundus images using a deep learning neural network. The AlexNet convolutional neural network (CNN) was used to ease the neural learning process. The data set was retrieved from the MESSIDOR database and contains 1200 fundus images. The images were filtered according to the project's needs: after filtering, 580 .tif images were used, divided into two classes, exudate images and normal images. For training and testing, the 580 mixed exudate and normal fundus images were split into a training set and a testing set. The results were merged into a confusion matrix, which shows that the accuracy of the CNN on the training and testing sets was 99.3% and 88.3%, respectively.
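A hedged sketch of the transfer-learning setup this abstract implies: torchvision's AlexNet with its final layer replaced for the two-class exudate-vs.-normal problem. Loading and preprocessing of the MESSIDOR images are omitted, and the dummy input merely checks the output shape.

```python
# Hedged sketch: AlexNet adapted to a two-class fundus-image problem.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)  # ImageNet-pretrained
model.classifier[6] = nn.Linear(4096, 2)   # replace 1000-way head: exudates vs. normal

x = torch.randn(1, 3, 224, 224)            # one dummy fundus image at AlexNet's input size
logits = model(x)
print(logits.shape)                        # torch.Size([1, 2])
```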


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s can detect the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals who are learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
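A minimal YOLOv5 inference sketch via torch.hub; this uses the generic COCO-pretrained yolov5s rather than the paper's custom eight-part engine model, and the image file name is hypothetical.

```python
# Hedged sketch: YOLOv5s object detection on a single image via torch.hub.
import torch

# Load the small YOLOv5 variant from the official Ultralytics repository.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("engine_bay.jpg")   # hypothetical photo of an engine bay
results.print()                     # class, confidence, and box per detection
results.save()                      # writes the annotated image to disk
```

For the use case described above, the same call pattern would apply after training custom weights on the annotated car-part dataset and loading them in place of the pretrained checkpoint.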

