Label-free SARS-CoV-2 detection and classification using phase imaging with computational specificity

2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Neha Goswami ◽  
Yuchen R. He ◽  
Yu-Heng Deng ◽  
Chamteut Oh ◽  
Nahil Sobh ◽  
...  

Abstract: Efforts to mitigate the COVID-19 crisis revealed that fast, accurate, and scalable testing is crucial for curbing the current impact and that of future pandemics. We propose an optical method for directly imaging unlabeled viral particles and using deep learning for detection and classification. An ultrasensitive interferometric method was used to image four virus types with nanoscale optical path-length sensitivity. Pairing these data with fluorescence images for ground truth, we trained semantic segmentation models based on U-Net, a particular type of convolutional neural network. The trained network was applied to classify the viruses from the interferometric images alone, containing simultaneously SARS-CoV-2, H1N1 (influenza-A virus), HAdV (adenovirus), and ZIKV (Zika virus). Remarkably, owing to the nanoscale sensitivity in the input data, the neural network was able to distinguish SARS-CoV-2 from the other viruses with 96% accuracy. The inference time for each image is 60 ms on a common graphics processing unit. This approach of directly imaging unlabeled viral particles may provide an extremely fast test, of less than a minute per patient. As the imaging instrument operates on regular glass slides, we envision this method potentially being applied to patient breath condensates. The necessary high throughput can be achieved by translating concepts from digital pathology, where a microscope can scan hundreds of slides automatically.
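The abstract does not specify how the segmentation output was scored against the fluorescence ground truth; as a minimal sketch (an assumption, not the authors' exact protocol), per-class agreement between a predicted label map and a ground-truth label map could be measured like this, with `segmentation_accuracy` a hypothetical helper:

```python
import numpy as np

def segmentation_accuracy(pred, truth, target_class):
    """Fraction of ground-truth pixels of `target_class` that the
    network labeled correctly (a simple per-class recall)."""
    mask = truth == target_class
    if not mask.any():
        return float("nan")
    return float((pred[mask] == target_class).mean())

# Toy 3x3 label maps: 0 = background, 1 = SARS-CoV-2, 2 = H1N1, ...
truth = np.array([[0, 1, 1],
                  [2, 2, 0],
                  [0, 3, 4]])
pred = np.array([[0, 1, 1],
                 [2, 0, 0],
                 [0, 3, 4]])
acc_cov2 = segmentation_accuracy(pred, truth, target_class=1)  # 1.0
acc_h1n1 = segmentation_accuracy(pred, truth, target_class=2)  # 0.5
```

In practice such per-class scores would be aggregated over many fields of view before reporting a single accuracy figure.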


2021 ◽  
Author(s):  
Masayoshi Sakakura ◽  
Gabriel Popescu ◽  
Andre Kajdacsy-Balla ◽  
Virgilia Macias

Evaluating the tissue collagen content in addition to the epithelial morphology has been proven to offer complementary information in histopathology, especially for disease stratification and prediction of patient survival. One imaging modality widely used for this purpose is second harmonic generation microscopy (SHGM), which reports on the nonlinear susceptibility associated with collagen fibers. Another is polarized light microscopy (PLM) combined with picrosirius-red (PSR) tissue staining. However, SHGM requires expensive equipment and provides limited throughput, while PLM and PSR staining are not part of the routine pathology workflow. Here, we advance phase imaging with computational specificity (PICS) to computationally infer the collagen distribution of unlabeled tissue with high specificity. PICS utilizes deep learning to translate quantitative phase images (QPI) into corresponding PSR images with high accuracy and speed. Our results indicate that the distributions of collagen fiber orientation, length, and straightness reported by PICS closely match those of the ground truth.
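The fiber-orientation statistics mentioned above can be extracted from an image in several ways; one common approach (an assumption here, not necessarily the authors' method) estimates the dominant orientation from image gradients via the 2x2 structure tensor:

```python
import numpy as np

def dominant_orientation(img):
    """Estimate the dominant fiber orientation (radians, in [0, pi))
    from image gradients via the 2x2 structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    # The gradient-energy orientation is perpendicular to the fibers,
    # so rotate by 90 degrees to get the fiber direction.
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    return (theta + np.pi / 2) % np.pi

# Synthetic "fibers": horizontal stripes, so gradients point vertically
# and the recovered fiber orientation is horizontal (0 rad).
img = np.zeros((16, 16))
img[::2, :] = 1.0
ang = dominant_orientation(img)
```

Computing this per small window, rather than globally, would yield the orientation distribution that PICS compares against the PSR ground truth.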


Entropy ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. 657 ◽  
Author(s):  
Maria Delgado-Ortet ◽  
Angel Molina ◽  
Santiago Alférez ◽  
José Rodellar ◽  
Anna Merino

Malaria is an endemic, life-threatening disease caused by unicellular protozoan parasites of the genus Plasmodium. Confirming the presence of parasites early in all malaria cases ensures species-specific antimalarial treatment, reduces the mortality rate, and, in negative cases, points to other illnesses. However, the gold standard remains light microscopy of May-Grünwald-Giemsa (MGG)-stained thin and thick peripheral blood (PB) films. This is a time-consuming procedure that depends on a pathologist's skills, meaning that healthcare providers may encounter difficulty diagnosing malaria in places where it is not endemic. This work presents a novel three-stage pipeline to (1) segment erythrocytes, (2) crop and mask them, and (3) classify them as malaria-infected or not. The first and third stages involved the design, training, validation, and testing of a segmentation neural network and a convolutional neural network from scratch using a graphics processing unit. Segmentation achieved a global accuracy of 93.72% over the test set, and the specificity for malaria detection in red blood cells (RBCs) was 87.04%. This work shows the potential of deep learning in the digital pathology field and opens the way for future improvements, as well as for broadening the use of the created networks.
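Stage (2) of the pipeline, cropping and masking each segmented erythrocyte, is simple image arithmetic; a minimal numpy sketch (the function name and padding behavior are illustrative assumptions, not the paper's code):

```python
import numpy as np

def crop_and_mask(image, mask, label, pad=0):
    """Crop the bounding box of one segmented cell and zero out
    everything outside its mask (stage 2 of the pipeline sketch)."""
    ys, xs = np.nonzero(mask == label)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, mask.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, mask.shape[1])
    crop = image[y0:y1, x0:x1].copy()
    crop[mask[y0:y1, x0:x1] != label] = 0  # suppress neighboring cells
    return crop

# Toy example: one labeled "cell" inside a 5x5 image.
image = np.arange(25).reshape(5, 5)
mask = np.zeros((5, 5), dtype=int)
mask[1:3, 1:4] = 1
cell = crop_and_mask(image, mask, label=1)
```

Each such crop would then be fed, one cell at a time, to the infected/uninfected classifier of stage (3).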


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Jingxi Li ◽  
Jason Garfinkel ◽  
Xiaoran Zhang ◽  
Di Wu ◽  
Yijie Zhang ◽  
...  

Abstract: An invasive biopsy followed by histological staining is the benchmark for pathological diagnosis of skin tumors. The process is cumbersome and time-consuming, often leading to unnecessary biopsies and scars. Emerging noninvasive optical technologies such as reflectance confocal microscopy (RCM) can provide label-free, cellular-level resolution, in vivo images of skin without performing a biopsy. Although RCM is a useful diagnostic tool, it requires specialized training because the acquired images are grayscale, lack nuclear features, and are difficult to correlate with tissue pathology. Here, we present a deep learning-based framework that uses a convolutional neural network to rapidly transform in vivo RCM images of unstained skin into virtually-stained hematoxylin and eosin-like images with microscopic resolution, enabling visualization of the epidermis, dermal-epidermal junction, and superficial dermis layers. The network was trained under an adversarial learning scheme, which takes ex vivo RCM images of excised unstained/label-free tissue as inputs and uses the microscopic images of the same tissue labeled with acetic acid nuclear contrast staining as the ground truth. We show that this trained neural network can be used to rapidly perform virtual histology of in vivo, label-free RCM images of normal skin structure, basal cell carcinoma, and melanocytic nevi with pigmented melanocytes, demonstrating similar histological features to traditional histology from the same excised tissue. This application of deep learning-based virtual staining to noninvasive imaging technologies may permit more rapid diagnoses of malignant skin neoplasms and reduce invasive skin biopsies.
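The adversarial learning scheme is not detailed in the abstract; as a hedged sketch, the standard non-saturating GAN objectives that such schemes typically optimize (an assumption, not the authors' exact loss, which likely also includes image-reconstruction terms) can be written in plain numpy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_real_logits, d_fake_logits, eps=1e-12):
    """Non-saturating GAN objectives: the discriminator pushes real
    scores up and fake (virtually stained) scores down; the generator
    is rewarded when its fakes are scored as real."""
    p_real = sigmoid(np.asarray(d_real_logits, dtype=float))
    p_fake = sigmoid(np.asarray(d_fake_logits, dtype=float))
    d_loss = -np.mean(np.log(p_real + eps)) - np.mean(np.log(1 - p_fake + eps))
    g_loss = -np.mean(np.log(p_fake + eps))
    return d_loss, g_loss

# At initialization the discriminator is uninformative (logits ~ 0).
d_loss, g_loss = gan_losses([0.0], [0.0])
```

During training, the generator (the RCM-to-histology network) and the discriminator would minimize `g_loss` and `d_loss` alternately.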


2020 ◽  
Author(s):  
David Schuhmacher ◽  
Klaus Gerwert ◽  
Axel Mosig

Abstract: In many settings in digital pathology or radiology, it is of predominant importance to train classifiers that can segment disease-associated regions in medical images. While numerous deep learning approaches, most notably U-Nets, exist to learn segmentations, these approaches typically require reference segmentations as training data. As a consequence, obtaining pixel-level annotations of histopathological samples has become a major bottleneck in establishing segmentation learning approaches. Our contribution introduces a neural network approach that avoids the annotation bottleneck in the first place: our approach requires two-class labels, such as cancer vs. healthy, at the sample level only. Using these sample labels, a meta-network is trained that infers a segmenting neural network which will segment the disease-associated region (e.g. tumor) that is present in the cancer samples but not in the healthy samples. This process results in a network, e.g. a U-Net, that can segment tumor regions in arbitrary further samples of the same type. We establish and validate our approach in the context of digital label-free pathology, where hyperspectral infrared microscopy is used to segment and characterize the disease status of histopathological samples. Trained on a data set comprising infrared microscopic images of 100 tissue microarray spots labelled as either cancerous or cancer-free, the approach yields a U-Net that reliably identifies tumor regions or the absence of tumor in an independent test set of 40 samples. While our present work is focused on training a U-Net for infrared microscopic images, the approach is generic in the sense that it can be adapted to other image modalities and essentially arbitrary segmenting network topologies.
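The core trick of supervising a segmentation map with only a sample-level label can be illustrated with a multiple-instance-style pooling loss; this is a minimal sketch of the general idea (max-pooling and cross-entropy are assumptions, not the authors' exact meta-network formulation):

```python
import numpy as np

def sample_level_loss(seg_map, sample_label, eps=1e-12):
    """Weak-supervision sketch: the pixel-wise tumor probability map
    is pooled to one sample-level score (here: max over pixels), which
    is compared to the cancer / cancer-free label by cross-entropy."""
    score = float(np.max(seg_map))          # any tumor pixel flags the sample
    score = min(max(score, eps), 1 - eps)   # keep the log well-defined
    if sample_label == 1:                   # cancerous sample
        return -np.log(score)
    return -np.log(1 - score)               # cancer-free sample
```

Minimizing such a loss over many labeled samples pressures the network to predict tumor pixels only where cancer is actually present, yielding a segmenter without any pixel-level annotation.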


Author(s):  
Muhammad Hanif Ahmad Nizar ◽  
Chow Khuen Chan ◽  
Azira Khalil ◽  
Ahmad Khairuddin Mohamed Yusof ◽  
Khin Wee Lai

Background: Valvular heart disease is a serious disease leading to mortality and increasing medical care costs. The aortic valve is the valve most commonly affected by this disease. Doctors rely on echocardiograms for diagnosing and evaluating valvular heart disease. However, echocardiogram images are poor in comparison to Computerized Tomography and Magnetic Resonance Imaging scans. This study proposes the development of Convolutional Neural Networks (CNN) that can function optimally during a live echocardiographic examination for detection of the aortic valve. An automated detection system in an echocardiogram will improve the accuracy of medical diagnosis and can provide further medical analysis from the resulting detections. Methods: Two detection architectures, the Single Shot Multibox Detector (SSD) and the Faster Region-based Convolutional Neural Network (Faster R-CNN), with various feature extractors, were trained on echocardiography images from 33 patients. Thereafter, the models were tested on 10 echocardiography videos. Results: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD Mobilenet v2. In terms of speed, SSD Mobilenet v2 suffered a loss of 46.81% in frames per second (fps) during real-time detection but still performed better than the other neural network models. Additionally, SSD Mobilenet v2 used the least Graphics Processing Unit (GPU) resources, while Central Processing Unit (CPU) usage was relatively similar across all models. Conclusion: Our findings provide a foundation for applying a convolutional detection system to echocardiography for medical purposes.
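Detection accuracy figures like those above typically rest on matching predicted boxes to ground truth by intersection-over-union (IoU); a self-contained sketch of that standard criterion (the threshold used by the study is not stated in the abstract):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2);
    the usual criterion for matching a detection to ground truth."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection of the aortic valve would count as correct when its IoU with the annotated box exceeds a chosen threshold (0.5 is a common convention).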


Author(s):  
Liang Kim Meng ◽  
Azira Khalil ◽  
Muhamad Hanif Ahmad Nizar ◽  
Maryam Kamarun Nisham ◽  
Belinda Pingguan-Murphy ◽  
...  

Background: Bone Age Assessment (BAA) refers to a clinical procedure that aims to identify a discrepancy between the biological and chronological age of an individual by assessing bone age growth. Currently, there are two main methods of performing BAA, known as the Greulich-Pyle and Tanner-Whitehouse techniques. Both involve a manual, qualitative assessment of hand and wrist radiographs, resulting in intra- and inter-operator variability and a time-consuming process. Automatic segmentation can be applied to the radiographs, providing the physician with more accurate delineation of the carpal bones and accurate quantitative analysis. Methods: In this study, we proposed an image feature extraction technique based on image segmentation with a fully convolutional network with a stride of eight pixels (FCN-8). A total of 290 radiographic images, including both female and male subjects aged 0 to 18, were manually segmented and trained using FCN-8. Results and Conclusion: The results exhibit a high training accuracy of 99.68% and a loss of 0.008619 over 50 epochs of training. The experiments compared 58 images against gold-standard ground truth images. The accuracy of our fully automated segmentation technique is 0.78 ± 0.06, 1.56 ± 0.30 mm, and 98.02% in terms of Dice Coefficient, Hausdorff Distance, and overall qualitative carpal recognition accuracy, respectively.
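The two quantitative metrics reported above, Dice coefficient and Hausdorff distance, have standard definitions; a minimal brute-force numpy implementation (assuming non-empty binary masks, and not the study's own evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance (in pixels) between the foreground
    point sets of two non-empty binary masks (brute force)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the pixel-valued Hausdorff distance is multiplied by the pixel spacing to obtain the millimeter figures quoted in the results.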


2021 ◽  
Vol 18 (1) ◽  
pp. 172988142199332
Author(s):  
Xintao Ding ◽  
Boquan Li ◽  
Jinbao Wang

Indoor object detection is a demanding and important task for robot applications. Object knowledge, such as two-dimensional (2D) shape and depth information, may be helpful for detection. In this article, we focus on the region-based convolutional neural network (CNN) detector and propose a geometric property-based Faster R-CNN method (GP-Faster) for indoor object detection. GP-Faster incorporates geometric properties into Faster R-CNN to improve detection performance. In detail, we first use mesh grids that are the intersections of direct and inverse proportion functions to generate appropriate anchors for indoor objects. After the anchors are regressed to the regions of interest produced by a region proposal network (RPN-RoIs), we then use 2D geometric constraints to refine the RPN-RoIs, in which the 2D constraint of every class is a convex hull region enclosing the width and height coordinates of the ground-truth boxes in the training set. Comparison experiments are conducted on two indoor datasets, SUN2012 and NYUv2. Since depth information is available in NYUv2, we incorporate depth constraints into GP-Faster and propose a 3D geometric property-based Faster R-CNN (DGP-Faster) on NYUv2. The experimental results show that both GP-Faster and DGP-Faster improve mean average precision.
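The anchor-generation idea, intersecting inverse proportion curves (constant area, w * h = A) with direct proportion lines (fixed aspect ratio, h = r * w), can be sketched in a few lines; this is an illustrative reading of the abstract, not the paper's code:

```python
import numpy as np

def make_anchors(base_area, aspect_ratios):
    """Anchor (w, h) pairs from the intersections of an inverse
    proportion curve (w * h = base_area) with direct proportion
    lines (h = r * w): fixed area sampled at several aspect ratios."""
    anchors = []
    for r in aspect_ratios:
        w = np.sqrt(base_area / r)   # solve w * (r * w) = base_area
        anchors.append((w, r * w))
    return anchors

anchors = make_anchors(64.0, [0.5, 1.0, 2.0])
```

Repeating this for several base areas produces the familiar scale-by-ratio anchor grid; the convex hull refinement would then discard proposals whose (w, h) falls outside the hull of the ground-truth boxes for that class.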


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Abstract: Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To establish the ground truth, four users independently annotated Prussian blue-labeled CMHs. The deep learning and ratiometric approaches performed better than the phasor analysis approach relative to the ground truth. The deep learning approach had the highest precision of the three methods. The ratiometric approach was the most versatile and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
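Ratiometric analysis of RGB pixel values, approach (1), amounts to thresholding a channel ratio; a crude sketch of the idea (the channel pair, threshold value, and function name are assumptions, not the study's calibrated parameters):

```python
import numpy as np

def prussian_blue_mask(rgb, ratio_thresh=1.3):
    """Ratiometric sketch: flag pixels whose blue channel dominates
    the red channel, a crude stand-in for Prussian blue staining."""
    rgb = rgb.astype(float)
    ratio = rgb[..., 2] / (rgb[..., 0] + 1e-6)  # avoid division by zero
    return ratio > ratio_thresh

# One gray background pixel and one blue-dominated pixel.
img = np.array([[[200, 200, 200],
                 [60, 90, 180]]], dtype=np.uint8)
mask = prussian_blue_mask(img)
```

Summing the resulting mask per section would give the stained-area quantification that the study compares against the phasor and deep learning approaches.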

