Radiologist-Level Performance Using Deep Learning for Segmentation of Breast Cancers on MRI

Author(s):  
Lukas Hirsch ◽  
Yu Huang ◽  
Shaojun Luo ◽  
Carolina Rossi Saccarelli ◽  
Roberto Lo Gullo ◽  
...  
Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1798 ◽  
Author(s):  
Cristina Jácome ◽  
Johan Ravn ◽  
Einar Holsbø ◽  
Juan Carlos Aviles-Solis ◽  
Hasse Melbye ◽  
...  

We applied deep learning to create an algorithm for breathing phase detection in lung sound recordings and compared the breathing phases detected by the algorithm with those manually annotated by two experienced lung sound researchers. The algorithm uses a convolutional neural network with spectrograms as input features, removing the need to specify features explicitly. We trained and evaluated the algorithm on three data subsets that are larger than those previously reported in the literature. We evaluated performance in two ways. First, a discrete count of agreed breathing phases (counting a pair of boxes as agreeing when they overlap by at least 50%) shows a mean agreement with the lung sound experts of 97% for inspiration and 87% for expiration. Second, the fraction of time in agreement (in seconds) gives higher pseudo-kappa values for inspiration (0.73–0.88) than for expiration (0.63–0.84), with an average sensitivity of 97% and an average specificity of 84%. Under both evaluation methods, the agreement between the annotators and the algorithm indicates human-level performance. The developed algorithm is valid for detecting breathing phases in lung sound recordings.
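The abstract does not give the network details, but the general recipe it describes (a spectrogram front end feeding a CNN, with one phase prediction per time frame) can be sketched as follows. The layer sizes, sample rate, and three-class labeling scheme are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: log-mel spectrogram front end + small CNN producing
# one inspiration/expiration/none prediction per spectrogram frame.
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torchaudio

class BreathPhaseCNN(nn.Module):
    def __init__(self, n_classes: int = 3):  # inspiration / expiration / none
        super().__init__()
        self.mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=4000,  # assumed recording rate
            n_fft=256, hop_length=64, n_mels=64,
        )
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution head: one class score per remaining time frame.
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = self.mel(waveform).unsqueeze(1).log1p()  # (batch, 1, mels, frames)
        h = self.features(spec)
        h = h.mean(dim=2, keepdim=True)                 # pool over frequency
        return self.head(h).squeeze(2)                  # (batch, classes, frames)

logits = BreathPhaseCNN()(torch.randn(2, 4000 * 10))    # two 10 s recordings
```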


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
Pietro A Cicalese ◽  
Syed A Rizvi ◽  
Candice Roufosse ◽  
Ibrahim Batal ◽  
Martin Hellmich ◽  
...  

Abstract

Background and Aims: Antibody-mediated rejection (AMR) is among the most common causes of kidney transplant loss. Its histological diagnosis is hampered by significant intra- and interobserver variability. Training a deep learning classifier to recognize AMR on glomerular transections, the most decisive compartment, could establish a reliable and perfectly reproducible diagnostic method.

Method: We identified 48 biopsies with AMR (all positive for donor-specific antibody) and 38 biopsies without AMR according to Banff 2017 from our archive. Photographs were taken of all non-globally sclerosed glomeruli on two PAS-stained level sections, yielding a total of 1,655 images as a training set. Of these, 1,503 images could be conclusively labeled as AMR or non-AMR by three experienced nephropathologists in a blinded fashion. We trained a DenseNet-121 classifier (pre-trained on ImageNet) with basic online augmentation. In addition, we implemented StyPath++, a data augmentation algorithm that leverages a style transfer mechanism to address significant domain shifts in histopathology. Each sample was assigned a consensus label generated by the pathologists.

Results: Five-fold cross-validation produced a weighted glomerular-level performance of 88.1%, exceeding the baseline performance by 5%. The improved generalization ability of the StyPath++-augmented model shows that it is possible to construct reliable glomerular classification algorithms from scarce datasets.

Conclusion: We created a deep learning classifier with excellent performance and reproducibility for the diagnosis of AMR on glomerular transections. We plan to expand the training set to include challenging differential diagnoses such as glomerulonephritis and other glomerulopathies. We are also interested in external clinicopathological datasets to further validate our results.
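As an illustration of the baseline setup described above (an ImageNet-pretrained DenseNet-121 with basic online augmentation and a two-class head), a minimal PyTorch/torchvision sketch might look as follows. The specific transforms are assumptions, and StyPath++ itself is not reproduced here.

```python
# Sketch of the transfer-learning baseline: ImageNet-pretrained
# DenseNet-121 refit for binary AMR / non-AMR classification, with
# simple online augmentation. Transforms are illustrative assumptions.
import torch.nn as nn
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
# Replace the 1000-class ImageNet head with a 2-class AMR / non-AMR head.
model.classifier = nn.Linear(model.classifier.in_features, 2)
```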


2020 ◽  
pp. 112067212097734
Author(s):  
Delaram Mirzania ◽  
Atalie C Thompson ◽  
Kelly W Muir

Glaucoma is the leading cause of irreversible blindness and disability worldwide. Nevertheless, the majority of patients do not know they have the disease, and detecting glaucoma progression with standard technology remains a challenge in clinical practice. Artificial intelligence (AI) is an expanding field that offers the potential to improve diagnosis and screening for glaucoma with minimal reliance on human input. Deep learning (DL) algorithms have risen to the forefront of AI by providing nearly human-level performance, at times exceeding that of humans, in the detection of glaucoma on structural and functional tests. A succinct summary of present studies and of the challenges that remain to be addressed in this field is needed. Following PRISMA guidelines, we conducted a systematic review of studies that applied DL methods to the detection of glaucoma using color fundus photographs, optical coherence tomography (OCT), or standard automated perimetry (SAP). In this review article, we describe recent advances in DL as applied to the diagnosis of glaucoma and glaucoma progression for application in screening and clinical settings, as well as the challenges that remain when applying this novel technique in glaucoma.


2018 ◽  
Author(s):  
Karim Rajaei ◽  
Yalda Mohsenzadeh ◽  
Reza Ebrahimpour ◽  
Seyed-Mahdi Khaligh-Razavi

Abstract

Core object recognition, the ability to rapidly recognize objects despite variations in their appearance, is largely solved through the feedforward processing of visual information. Deep neural networks have been shown to achieve human-level performance in these tasks and to explain neural representations in the primate brain. On the other hand, object recognition under more challenging conditions (i.e., beyond the core recognition problem) is less well characterized. One such example is object recognition under occlusion. It is unclear to what extent feedforward and recurrent processes contribute to object recognition under occlusion. Furthermore, we do not know whether conventional deep neural networks such as AlexNet, which were shown to be successful in solving core object recognition, can perform similarly well on problems that go beyond core recognition. Here, we characterize the neural dynamics of object recognition under occlusion using magnetoencephalography (MEG) while participants were presented with images of objects at various levels of occlusion. We provide evidence from multivariate analysis of MEG data, behavioral data, and computational modelling demonstrating an essential role for recurrent processes in object recognition under occlusion. Furthermore, the computational model with local recurrent connections used here suggests a mechanistic explanation of how the human brain might be solving this problem.

Author Summary

In recent years, deep-learning-based computer vision algorithms have been able to achieve human-level performance on several object recognition tasks. This has also contributed to our understanding of how our brain may be solving these recognition tasks. However, object recognition under more challenging conditions, such as occlusion, is less well characterized, and the temporal dynamics of object recognition under occlusion are largely unknown in the human brain. Furthermore, we do not know whether previously successful deep-learning algorithms can achieve human-level performance on these more challenging object recognition tasks. By linking brain data with behavior and computational modeling, we characterized the temporal dynamics of object recognition under occlusion and proposed a computational mechanism that explains both the behavioral and the neural data in humans. This provides a plausible mechanistic explanation for how our brain might solve object recognition under more challenging conditions.
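The notion of "local recurrent connections" can be illustrated with a minimal sketch: a convolutional layer whose units receive lateral input from nearby units at the previous timestep, unrolled for a fixed number of steps. This is only a schematic of the idea, not the authors' model.

```python
# Schematic of a convolutional layer with local recurrent (lateral)
# connections, unrolled over discrete timesteps. Channel counts and the
# number of steps are illustrative assumptions.
import torch
import torch.nn as nn

class LocallyRecurrentConv(nn.Module):
    def __init__(self, channels: int = 32, steps: int = 4):
        super().__init__()
        self.steps = steps
        self.feedforward = nn.Conv2d(3, channels, 3, padding=1)
        # Lateral kernel: each unit is driven by nearby units in the
        # same layer at the previous timestep.
        self.lateral = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        drive = self.feedforward(x)          # constant feedforward drive
        state = torch.relu(drive)
        for _ in range(self.steps):          # unrolled recurrent dynamics
            state = torch.relu(drive + self.lateral(state))
        return state

out = LocallyRecurrentConv()(torch.randn(1, 3, 64, 64))
```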


2020 ◽  
Vol 59 (12) ◽  
pp. 2057-2073
Author(s):  
Yingkai Sha ◽  
David John Gagne II ◽  
Gregory West ◽  
Roland Stull

Abstract

Many statistical downscaling methods require observational inputs and expert knowledge and thus cannot be generalized well across different regions. Convolutional neural networks (CNNs) are deep-learning models that have generalization abilities for various applications. In this research, we modify UNet, a semantic-segmentation CNN, and apply it to the downscaling of daily maximum/minimum 2-m temperature (TMAX/TMIN) over the western continental United States from 0.25° to 4-km grid spacings. We select high-resolution (HR) elevation, low-resolution (LR) elevation, and LR TMAX/TMIN as inputs; train UNet using Parameter–Elevation Regressions on Independent Slopes Model (PRISM) data over the south- and central-western United States from 2015 to 2018; and test it independently over both the training domains and the northwestern United States from 2018 to 2019. We found that the original UNet cannot generate enough fine-grained spatial details when transferred to the new northwestern U.S. domain. In response, we modified the original UNet by assigning an extra HR elevation output branch/loss function and training the modified UNet to reproduce both the supervised HR TMAX/TMIN and the unsupervised HR elevation. This improvement is named "UNet-Autoencoder (AE)." UNet-AE supports semisupervised model fine-tuning for unseen domains and showed better gridpoint-level performance with more than 10% mean absolute error (MAE) reduction relative to the original UNet. On the basis of its performance relative to the 4-km PRISM, UNet-AE is a good option to provide generalizable downscaling for regions that are underrepresented by observations.
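The UNet-AE training objective described above, one supervised loss on the HR TMAX/TMIN output and one unsupervised reconstruction loss on the extra HR elevation branch, can be sketched as follows. The choice of MAE for both terms and the equal weighting are illustrative assumptions; the UNet body itself is omitted.

```python
# Sketch of the two-branch UNet-AE objective: supervised temperature
# downscaling loss plus elevation-reconstruction loss. The MAE losses
# and the default weighting are illustrative assumptions.
import torch.nn as nn

mae = nn.L1Loss()

def unet_ae_loss(pred_t2m, true_t2m, pred_elev, true_elev, w_elev=1.0):
    """Total loss = supervised TMAX/TMIN term + elevation branch term."""
    return mae(pred_t2m, true_t2m) + w_elev * mae(pred_elev, true_elev)

def fine_tune_loss(pred_elev, true_elev):
    """For an unseen domain, only HR elevation is available as a target,
    so semisupervised fine-tuning uses the reconstruction term alone."""
    return mae(pred_elev, true_elev)
```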


Author(s):  
Muhammad Khaerul Naim Mursalim ◽  
Ade Kurniawan

COVID-19, which originated in Wuhan, spread rapidly throughout the world and became a public health crisis. Recognizing positive cases at the earliest stage was crucial to restraining the spread of the virus and to providing medical treatment quickly for affected patients. However, the limited supply of RT-PCR as a diagnostic tool caused great delays in obtaining examination results for suspected patients. Previous research indicated that radiologic images could be utilized to detect COVID-19 before symptoms appeared. With the rapid development of artificial intelligence in medical imaging in recent years, deep learning, as the core of this technology, can achieve human-level performance in diagnostic accuracy. In this paper, deep learning was implemented to detect COVID-19 using a chest X-ray dataset. The proposed model employs a multi-kernel convolutional neural network (CNN) block combined with a pre-trained ResNet-34 to overcome an imbalanced dataset. The block adopts four kernel sizes: 1×1, 3×3, 5×5, and 7×7. The findings show that the proposed model is capable of performing binary and three-class classification tasks with accuracies of 100% and 93.51% in the validation phase and 95% and 83% in the test phase, respectively.
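A minimal sketch of such a multi-kernel block, with parallel 1×1, 3×3, 5×5, and 7×7 convolutions concatenated and projected into a pretrained ResNet-34, might look as follows. The channel counts and the wiring into the backbone are assumptions, not the authors' exact design.

```python
# Sketch of a multi-kernel CNN block feeding a pretrained ResNet-34.
# Branch widths and the 3-channel projection are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MultiKernelBlock(nn.Module):
    def __init__(self, in_ch: int = 3, branch_ch: int = 16):
        super().__init__()
        # Parallel branches with 1x1, 3x3, 5x5, and 7x7 kernels;
        # padding k // 2 preserves the spatial dimensions.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)
            for k in (1, 3, 5, 7)
        ])
        # Project the concatenated branches back to 3 channels so the
        # ImageNet-pretrained ResNet-34 stem can consume them unchanged.
        self.project = nn.Conv2d(4 * branch_ch, 3, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # three-class variant
model = nn.Sequential(MultiKernelBlock(), backbone)

logits = model(torch.randn(1, 3, 224, 224))
```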


Author(s):  
Chandra Vadhana ◽  
Shanthi Bala P. ◽  
Immanuel Zion Ramdinthara

Deep learning models can achieve accuracy that at times exceeds human-level performance, which is crucial for safety-critical applications such as driverless cars, aerospace, defence, medical research, and industrial automation. Most deep learning methods mimic neural networks: they stack many hidden layers and learn patterns for decision making. Deep learning is a subset of machine learning that performs end-to-end learning, has the capability to learn from unlabeled data, and provides a very flexible, learnable framework for representing visual and linguistic information. Deep learning has greatly changed the way computing devices process human-centric content such as speech, images, and natural language. Deep learning plays a major role in IoT-related services: amalgamating deep learning into the IoT environment makes complex sensing and recognition tasks easier and helps to automatically identify patterns and detect anomalies in the data that IoT devices generate. This chapter discusses the impact of deep learning in the IoT environment.


2021 ◽  
Author(s):  
Noah F. Greenwald ◽  
Geneva Miller ◽  
Erick Moen ◽  
Alex Kong ◽  
Adam Kagel ◽  
...  

Abstract

Understanding the spatial organization of tissues is of critical importance for both basic and translational research. While recent advances in tissue imaging are opening an exciting new window into the biology of human tissues, interpreting the data that they create is a significant computational challenge. Cell segmentation, the task of uniquely identifying each cell in an image, remains a substantial barrier for tissue imaging, as existing approaches are inaccurate or require a substantial amount of manual curation to yield useful results. Here, we addressed the problem of cell segmentation in tissue imaging data through large-scale data annotation and deep learning. We constructed TissueNet, an image dataset containing >1 million paired whole-cell and nuclear annotations for tissue images from nine organs and six imaging platforms. We created Mesmer, a deep learning-enabled segmentation algorithm trained on TissueNet that performs nuclear and whole-cell segmentation in tissue imaging data. We demonstrated that Mesmer has better speed and accuracy than previous methods, generalizes to the full diversity of tissue types and imaging platforms in TissueNet, and achieves human-level performance for whole-cell segmentation. Mesmer enabled the automated extraction of key cellular features, such as subcellular localization of protein signal, which was challenging with previous approaches. We further showed that Mesmer could be adapted to harness cell lineage information present in highly multiplexed datasets. We used this enhanced version to quantify cell morphology changes during human gestation. All underlying code and models are released with permissive licenses as a community resource.
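Since the code and models are released, the trained Mesmer model can be driven through the deepcell Python package. The sketch below follows the project's documented API (a batch of two-channel nuclear + membrane images and a microns-per-pixel argument); the file names and pixel size are hypothetical.

```python
# Sketch of whole-cell segmentation with the released Mesmer model via
# the deepcell package (pip install deepcell). Input is (batch, y, x, 2)
# with a nuclear and a membrane channel, per the project's documentation.
# File names and image_mpp are hypothetical placeholders.
import numpy as np
from deepcell.applications import Mesmer

nuclear = np.load("nuclear_channel.npy")     # hypothetical input files
membrane = np.load("membrane_channel.npy")
image = np.stack([nuclear, membrane], axis=-1)[np.newaxis, ...]

app = Mesmer()
segmentation = app.predict(
    image,
    image_mpp=0.5,              # microns per pixel of the acquisition
    compartment="whole-cell",   # or "nuclear"
)
```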

