Establishment and validation of a computer-assisted colonic polyp localization system based on deep learning

2021 ◽  
Vol 27 (31) ◽  
pp. 5232-5246
Author(s):  
Sheng-Bing Zhao ◽  
Wei Yang ◽  
Shu-Ling Wang ◽  
Peng Pan ◽  
Run-Dong Wang ◽  
...


BioChem ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 36-48
Author(s):  
Ivan Jacobs ◽  
Manolis Maragoudakis

Computer-assisted de novo design of natural product mimetics offers a viable strategy to reduce synthetic effort and obtain natural-product-inspired bioactive small molecules, but it suffers from several limitations. Deep learning techniques can help address these shortcomings. We propose generating synthetic molecular structures that optimize binding affinity to a target. To achieve this, we leverage important advancements in deep learning. Our approach generalizes to systems beyond the source system and generates complete structures that optimize binding to a target unseen during training. Translating the input sub-systems into the latent space enables searching for similar structures and sampling from the latent space for generation.
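
As a rough illustration of the latent-space search and sampling described above, the sketch below uses a fixed random projection as a stand-in for the learned encoder and a hypothetical, pre-featurised sub-structure library; all names and dimensions are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of latent-space search and sampling, assuming a pre-trained
# encoder that maps molecular sub-structures to fixed-length latent vectors.
# The encoder and the candidate library are placeholders, not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned encoder: a single fixed random projection to a 32-d latent space.
PROJECTION = rng.standard_normal((64, 32))

def encode(substructure_features: np.ndarray) -> np.ndarray:
    """Map featurised sub-structures to latent vectors (placeholder encoder)."""
    return substructure_features @ PROJECTION

# Hypothetical library of known sub-structures, already featurised (values illustrative).
library_features = rng.standard_normal((1000, 64))
library_latents = encode(library_features)

# 1) Search: find library entries closest in latent space to a query sub-structure.
query_latent = encode(rng.standard_normal((1, 64)))
distances = np.linalg.norm(library_latents - query_latent, axis=1)
nearest = np.argsort(distances)[:5]  # indices of the five most similar entries

# 2) Generation: sample new latent points around the query; a decoder (not shown)
#    would map these back to complete candidate structures.
samples = query_latent + 0.1 * rng.standard_normal((10, query_latent.shape[1]))
print(nearest, samples.shape)
```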


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 967
Author(s):  
Amirreza Mahbod ◽  
Gerald Schaefer ◽  
Christine Löw ◽  
Georg Dorffner ◽  
Rupert Ecker ◽  
...  

Nuclei instance segmentation can be considered a key step in the computer-mediated analysis of histological fluorescence-stained (FS) images. Many computer-assisted approaches have been proposed for this task, and among them, supervised deep learning (DL) methods deliver the best performance. An important factor that can affect DL-based nuclei instance segmentation of FS images is the image bit depth used, but to our knowledge, no study has so far investigated this impact. In this work, we released a fully annotated FS histological image dataset of nuclei at different image magnifications and from five different mouse organs. Moreover, by applying different pre-processing techniques and using one of the state-of-the-art DL-based methods, we investigated the impact of image bit depth (i.e., 8-bit vs. 16-bit) on nuclei instance segmentation performance. The results obtained from our dataset and another publicly available dataset showed very competitive nuclei instance segmentation performance for the models trained with 8-bit and 16-bit images, suggesting that processing 8-bit images is sufficient for nuclei instance segmentation of FS images in most cases. The dataset, including the raw image patches as well as the corresponding segmentation masks, is publicly available in the published GitHub repository.
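
The bit-depth comparison can be pictured with a small sketch like the one below, which quantises a 16-bit fluorescence patch to 8 bits and normalises both variants as network inputs; the min-max normalisation is an assumption for illustration, not necessarily the exact pre-processing used in the study.

```python
# Minimal sketch of the 16-bit vs. 8-bit comparison, assuming raw fluorescence
# patches stored as uint16 arrays; the normalisation choice is illustrative.
import numpy as np

def to_float_input(patch: np.ndarray) -> np.ndarray:
    """Min-max normalise a patch to [0, 1] regardless of its bit depth."""
    patch = patch.astype(np.float32)
    lo, hi = patch.min(), patch.max()
    return (patch - lo) / (hi - lo + 1e-8)

def quantise_to_8bit(patch16: np.ndarray) -> np.ndarray:
    """Convert a 16-bit patch to 8-bit, discarding the extra intensity resolution."""
    return (to_float_input(patch16) * 255).astype(np.uint8)

patch16 = np.random.randint(0, 65535, size=(256, 256), dtype=np.uint16)
patch8 = quantise_to_8bit(patch16)

# Both variants end up as float inputs in [0, 1] for the segmentation network;
# the question studied is whether the 8-bit quantisation costs any accuracy.
x16 = to_float_input(patch16)
x8 = to_float_input(patch8)
print(x16.shape, x8.shape, np.abs(x16 - x8).max())
```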


2022 ◽  
Author(s):  
Maede Maftouni ◽  
Bo Shen ◽  
Andrew Chung Chee Law ◽  
Niloofar Ayoobi Yazdi ◽  
Zhenyu Kong

The global extent of COVID-19 mutations and the consequent depletion of hospital resources have highlighted the need for effective computer-assisted medical diagnosis. COVID-19 detection mediated by deep learning models can help diagnose this highly contagious disease and lower infectivity and mortality rates. Computed tomography (CT) is the preferred imaging modality for building automatic COVID-19 screening and diagnosis models. It is well known that the training set size significantly impacts the performance and generalization of deep learning models. However, accessing a large dataset of CT scan images for an emerging disease like COVID-19 is challenging. Therefore, data efficiency becomes a significant factor in choosing a learning model. To this end, we present a multi-task learning approach, namely a mask-guided attention (MGA) classifier, to improve the generalization and data efficiency of COVID-19 classification on lung CT scan images.

The novelty of this method lies in compensating for the scarcity of data by employing additional supervision from lesion masks, increasing the sensitivity of the model to COVID-19 manifestations and improving both generalization and classification performance. Our proposed model achieves better overall performance than the single-task baseline and state-of-the-art models, as measured by various popular metrics. In our experiments with different percentages of data from our curated dataset, the classification performance gain from this multi-task learning approach is more significant for smaller training sizes. Furthermore, experimental results demonstrate that our method enhances the focus on the lesions, as witnessed by both attention and attribution maps, resulting in a more interpretable model.
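
A minimal sketch of such a mask-guided attention classifier is shown below: a shared encoder feeds a lesion-mask branch whose output gates the classification features, and the two heads are trained with a joint loss. The layer sizes and loss weighting are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal PyTorch sketch of a mask-guided attention (MGA) classifier (illustrative).
import torch
import torch.nn as nn

class MGAClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Conv2d(32, 1, 1)              # lesion-mask branch
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        mask_logits = self.mask_head(feats)
        attention = torch.sigmoid(mask_logits)            # soft lesion attention
        cls_logits = self.classifier(feats * attention)   # mask-guided features
        return cls_logits, mask_logits

model = MGAClassifier()
ct = torch.randn(4, 1, 128, 128)                          # toy CT slices
lesion_mask = torch.randint(0, 2, (4, 1, 128, 128)).float()
label = torch.randint(0, 2, (4,))

cls_logits, mask_logits = model(ct)
# Joint loss: classification plus lesion-mask supervision (weight is an assumption).
loss = nn.functional.cross_entropy(cls_logits, label) \
     + 0.5 * nn.functional.binary_cross_entropy_with_logits(mask_logits, lesion_mask)
loss.backward()
```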


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142092163
Author(s):  
Tianyi Li ◽  
Yuhan Qian ◽  
Arnaud de La Fortelle ◽  
Ching-Yao Chan ◽  
Chunxiang Wang

This article presents a lane-level localization system adaptive to different driving conditions, such as occlusions, complicated road structures, and lane-changing maneuvers. The system uses surround-view cameras, other low-cost sensors, and a lane-level road map, which makes it suitable for mass deployment. A map-matching localizer is proposed to estimate the probabilistic lateral position. It consists of a sub-map extraction module, a perceptual model, and a matching model. A probabilistic lateral road feature is devised as a sub-map without limitations on road structures. The perceptual model is a deep learning network that processes raw images from the surround-view cameras to extract a local probabilistic lateral road feature. Unlike conventional deep-learning-based methods, the perceptual model is trained with labels auto-generated from the lane-level map to reduce manual effort. The matching model computes the correlation between the sub-map and the local probabilistic lateral road feature to output the probabilistic lateral estimation. A particle-filter-based framework is developed to fuse the output of the map-matching localizer with measurements from wheel speed sensors and an inertial measurement unit. Experimental results demonstrate that the proposed system provides localization results with sub-meter accuracy in different driving conditions.
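
The fusion step can be sketched with a toy particle filter like the one below, where particles for the lateral offset are propagated with a motion input and re-weighted by a stand-in for the map-matching lateral probability; the noise values and likelihood function are assumptions for illustration, not the paper's parameters.

```python
# Minimal particle-filter sketch of the fusion step (illustrative values only).
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(loc=0.0, scale=1.0, size=N)   # lateral offset [m] per particle
weights = np.full(N, 1.0 / N)

def lateral_likelihood(lateral_pos: np.ndarray) -> np.ndarray:
    """Stand-in for the map-matching output: a Gaussian bump around the true lane."""
    return np.exp(-0.5 * ((lateral_pos - 0.3) / 0.2) ** 2)

def step(particles, weights, lateral_velocity, dt=0.05):
    # Predict: propagate with wheel-speed/IMU-derived lateral velocity plus noise.
    particles = particles + lateral_velocity * dt + rng.normal(0.0, 0.05, size=N)
    # Update: re-weight by the map-matching lateral probability.
    weights = weights * lateral_likelihood(particles)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

for _ in range(100):
    particles, weights = step(particles, weights, lateral_velocity=0.1)
print("estimated lateral offset:", np.sum(particles * weights))
```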


2020 ◽  
Vol 9 (4) ◽  
pp. 267 ◽  
Author(s):  
Da Li ◽  
Yingke Lei ◽  
Xin Li ◽  
Haichuan Zhang

Wi-Fi and magnetic field fingerprinting-based localization has gained increased attention owing to its satisfactory accuracy and global availability. Common signal-based fingerprint localization deteriorates due to well-known signal fluctuations. In this paper, we propose a Wi-Fi and magnetic field-based localization system built on deep learning. Owing to the low discernibility of magnetic field strength (MFS) in large areas, an unsupervised density peak clustering algorithm based on the comparison distance (CDPC) is first used to pick several center points of MFS as geotagged features to assist localization. Considering the state-of-the-art application of deep learning to image classification, we design a location fingerprint image from Wi-Fi and magnetic field fingerprints for localization. Localization is then performed by a proposed deep residual network (ResNet) capable of learning key features from a massive fingerprint image database. To further enhance localization accuracy, by leveraging the prior information of the pre-trained ResNet coarse localizer, an MLP-based transfer learning fine localizer is introduced to fine-tune the coarse localizer. Additionally, we dynamically adjust the learning rate (LR) and adopt several data enhancement methods to increase the robustness of our localization system. Experimental results show that the proposed system delivers satisfactory localization performance in both indoor and outdoor environments.
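
The coarse-to-fine transfer idea can be sketched as follows: a convolutional coarse localizer (here a tiny stand-in for the pre-trained ResNet) is frozen, and a small MLP fine localizer is trained on its features. The fingerprint-image construction and all layer sizes are illustrative assumptions, not the paper's design.

```python
# Minimal PyTorch sketch of the coarse-to-fine transfer-learning idea (illustrative).
import torch
import torch.nn as nn

def fingerprint_image(wifi_rssi: torch.Tensor, mfs: torch.Tensor) -> torch.Tensor:
    """Stack Wi-Fi RSSI and magnetic-field channels into a 2-channel 'image'."""
    return torch.stack([wifi_rssi, mfs], dim=0)            # (2, H, W)

coarse = nn.Sequential(                                     # stand-in for the ResNet coarse localizer
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),                  # -> 16 * 4 * 4 = 256 features
)
fine = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))  # (x, y) position

for p in coarse.parameters():                               # transfer learning: freeze coarse features
    p.requires_grad = False

img = fingerprint_image(torch.rand(16, 16), torch.rand(16, 16)).unsqueeze(0)
target_xy = torch.tensor([[3.2, 7.5]])                      # hypothetical ground-truth position [m]

optimizer = torch.optim.Adam(fine.parameters(), lr=1e-3)
pred_xy = fine(coarse(img))
loss = nn.functional.mse_loss(pred_xy, target_xy)
loss.backward()
optimizer.step()
```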


Cancers ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 111 ◽  
Author(s):  
Gopal S. Tandel ◽  
Mainak Biswas ◽  
Omprakash G. Kakde ◽  
Ashish Tiwari ◽  
Harman S. Suri ◽  
...  

A World Health Organization (WHO) report from February 2018 showed that the mortality rate due to brain or central nervous system (CNS) cancer is highest in the Asian continent. It is of critical importance that cancer be detected early so that many of these lives can be saved. Cancer grading is an important aspect of targeted therapy. As cancer diagnosis is highly invasive, time-consuming and expensive, there is an immediate need to develop non-invasive, cost-effective and efficient tools for brain cancer characterization and grade estimation. Brain scans using magnetic resonance imaging (MRI), computed tomography (CT), and other imaging modalities are fast and safer methods for tumor detection. In this paper, we summarize the pathophysiology of brain cancer, the imaging modalities used for brain cancer, and automatic computer-assisted methods for brain cancer characterization within the machine and deep learning paradigm. Another objective of this paper is to identify current issues in existing engineering methods and to project a future paradigm. Further, we highlight the relationship between brain cancer and other brain disorders such as stroke, Alzheimer's, Parkinson's and Wilson's disease, leukoaraiosis, and other neurological disorders in the context of the machine learning and deep learning paradigms.


BMJ Open ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. e035757
Author(s):  
Chenyang Zhao ◽  
Mengsu Xiao ◽  
He Liu ◽  
Ming Wang ◽  
Hongyan Wang ◽  
...  

Objective: The aim of the study is to explore the potential value for residents-in-training of S-Detect, a computer-assisted diagnosis system based on a deep learning (DL) algorithm.

Methods: The study was designed as a cross-sectional study. Routine breast ultrasound examinations were conducted by an experienced radiologist. The ultrasound images of the lesions were retrospectively assessed by five residents-in-training according to the Breast Imaging Reporting and Data System (BI-RADS) lexicon, and a dichotomous classification of the lesions was provided by S-Detect. The diagnostic performances of S-Detect and the five residents were measured and compared using the pathological results as the gold standard. Category 4a lesions assessed by the residents were downgraded to possibly benign when classified as such by S-Detect. The diagnostic performance of the integrated results was compared with the residents' original results.

Participants: A total of 195 focal breast lesions were consecutively enrolled, including 82 malignant lesions and 113 benign lesions.

Results: S-Detect presented higher specificity (77.88%) and area under the curve (AUC) (0.82) than the residents (specificity: 19.47%–48.67%, AUC: 0.62–0.74). A total of 24, 31, 38, 32 and 42 lesions identified as BI-RADS 4a by residents 1, 2, 3, 4 and 5, respectively, were downgraded to possibly benign by S-Detect. Among these downgraded lesions, 24, 28, 35, 30 and 40 lesions, respectively, were proven to be pathologically benign. After combining the residents' results with the software's results for category 4a lesions, the specificity and AUC of the five residents improved significantly (specificity: 46.02%–76.11%, AUC: 0.71–0.85, p<0.001). The intraclass correlation coefficient of the five residents also increased after integration (from 0.480 to 0.643).

Conclusions: With the help of the DL software, the specificity, overall diagnostic performance and interobserver agreement of the residents greatly improved. The software can be used as an adjunctive tool for residents-in-training, downgrading category 4a lesions to possibly benign and reducing unnecessary biopsies.
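
The integration rule reported above, downgrading a resident's BI-RADS 4a call when S-Detect labels the lesion possibly benign, can be expressed in a few lines; the toy data below are purely illustrative and unrelated to the study's lesions.

```python
# Minimal sketch of the downgrading rule and its effect on specificity (toy data).
def integrate(resident_category: str, s_detect_benign: bool) -> str:
    """Return the combined assessment for a single lesion."""
    if resident_category == "4a" and s_detect_benign:
        return "possibly benign"
    return resident_category

def specificity(assessments, pathology_benign):
    """Fraction of pathologically benign lesions not called suspicious."""
    benign_calls = {"possibly benign", "2", "3"}
    true_negatives = sum(a in benign_calls and b for a, b in zip(assessments, pathology_benign))
    return true_negatives / sum(pathology_benign)

residents = ["4a", "4a", "3", "4b", "4a"]      # hypothetical resident BI-RADS calls
s_detect = [True, True, False, False, False]   # hypothetical S-Detect "possibly benign" flags
benign = [True, True, True, False, False]      # hypothetical pathology ground truth

combined = [integrate(r, s) for r, s in zip(residents, s_detect)]
print(specificity(residents, benign), "->", specificity(combined, benign))
```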


AI ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 166-179 ◽  
Author(s):  
Ziyang Tang ◽  
Xiang Liu ◽  
Hanlin Chen ◽  
Joseph Hupy ◽  
Baijian Yang

Unmanned Aerial Systems, hereafter referred to as UAS, are of great use in hazard events such as wildfires due to their ability to provide high-resolution video imagery over areas deemed too dangerous for manned aircraft and ground crews. This aerial perspective allows for identification of ground-based hazards such as spot fires and fire lines, and for communicating this information to firefighting crews. Current technology relies on visual interpretation of UAS imagery, with little to no computer-assisted automatic detection. With the help of large labeled datasets and the significant increase in computing power, deep learning has seen great success in detecting objects with fixed patterns, such as people and vehicles. However, little has been done for objects, such as spot fires, with amorphous and irregular shapes. Additional challenges arise when data are collected via UAS as high-resolution aerial images or videos; a viable solution must provide reasonable accuracy with low latency. In this paper, we examined 4K (3840 × 2160) videos collected by UAS from a controlled burn and created a set of labeled videos to be shared for public use. We introduce a coarse-to-fine framework to automatically detect wildfires that are sparse, small, and irregularly shaped. The coarse detector adaptively selects the sub-regions that are likely to contain the objects of interest, while the fine detector examines only those sub-regions, rather than the entire 4K frame, in detail. The proposed two-phase learning therefore greatly reduces time overhead while maintaining high accuracy. Compared against the real-time one-stage YOLOv3 backbone, the proposed method improved the mean average precision (mAP) from 0.29 to 0.67, with an average inference speed of 7.44 frames per second. Limitations and future work are discussed with regard to the design and the experimental results.
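
The coarse-to-fine scheme can be sketched as below: a cheap coarse pass scores grid tiles of a 4K frame, and only high-scoring tiles are cropped for the fine detector. Both detectors here are placeholders for the trained networks, and the tile size and threshold are assumptions for illustration.

```python
# Minimal sketch of coarse-to-fine sub-region selection on a 4K frame (illustrative).
import numpy as np

rng = np.random.default_rng(2)
frame = rng.random((2160, 3840, 3), dtype=np.float32)    # one 4K video frame
TILE = 480                                                # coarse grid cell size (assumption)

def coarse_score(tile: np.ndarray) -> float:
    """Stand-in for the coarse detector: probability that the tile contains fire."""
    return float(tile[..., 0].mean())                     # toy red-channel heuristic

def fine_detect(tile: np.ndarray) -> list:
    """Stand-in for the fine detector: would return bounding boxes inside this tile."""
    return []

candidates, total = [], 0
for y in range(0, frame.shape[0], TILE):
    for x in range(0, frame.shape[1], TILE):
        total += 1
        if coarse_score(frame[y:y + TILE, x:x + TILE]) > 0.5:   # keep only likely sub-regions
            candidates.append((y, x))

detections = [(y, x, fine_detect(frame[y:y + TILE, x:x + TILE])) for y, x in candidates]
print(f"{len(candidates)} of {total} tiles passed to the fine detector")
```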

