Signal denoising of viral particle in wide-field photon scattering parametric images using deep learning

2021
pp. 127463
Author(s): Hanwen Zhao, Bin Ni, WeiPing Liu, Xiao Jin, Heng Zhang, ...
2020
Vol 9 (8)
pp. 2537
Author(s): Joan M. Nunez do Rio, Piyali Sen, Rajna Rasheed, Akanksha Bagchi, Luke Nicholson, ...

Reliable outcome measures are required for clinical trials investigating novel agents for preventing progression of capillary non-perfusion (CNP) in retinal vascular diseases. Currently, quantification of the topographical distribution of CNP on ultrawide field fluorescein angiography (UWF-FA) by retinal experts is subjective and lacks standardisation. A U-Net style network was trained to extract a dense segmentation of CNP from a newly created dataset of 75 UWF-FA images. A subset of 20 images was also segmented by a second expert grader to evaluate inter-grader reliability, and a circular grid centred on the foveal avascular zone (FAZ) was used to provide a standardised analysis of CNP distribution. The dense segmentation model was five-fold cross-validated, achieving an area under the receiver operating characteristic curve of 0.82 (0.03) and an area under the precision-recall curve of 0.73 (0.05). On the 20-image subset, inter-grader assessment achieved precision 59.34 (10.92), recall 76.99 (12.5), and Dice similarity coefficient (DSC) 65.51 (4.91), while the selected operating point of the automated model reached precision 64.41 (13.66), recall 70.02 (16.2), and DSC 66.09 (13.32). Agreement between automated and manual CNP grid assessment reached Kappa 0.55 (0.03), perfused intraclass correlation coefficient (ICC) 0.89 (0.77, 0.93), and non-perfused ICC 0.86 (0.73, 0.92); the corresponding inter-grader values were Kappa 0.43 (0.03), perfused ICC 0.70 (0.48, 0.83), and non-perfused ICC 0.71 (0.48, 0.83). Automated dense segmentation of CNP in UWF-FA images therefore achieves performance comparable to inter-grader agreement. A grid placed on the deep learning-based automatic segmentation of CNP yields a reliable, quantifiable measurement of CNP that overcomes the subjectivity of human graders.
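The precision, recall, and Dice values reported above all derive from overlap counts between a predicted mask and a reference mask. Below is a minimal sketch of how such segmentation metrics are computed from binary masks; the function and the placeholder masks are illustrative, not the evaluation code used in the study.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Precision, recall, and Dice similarity coefficient (DSC)
    between two binary segmentation masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positive pixels
    fp = np.logical_and(pred, ~truth).sum()   # false positive pixels
    fn = np.logical_and(~pred, truth).sum()   # false negative pixels
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dsc = 2 * tp / (2 * tp + fp + fn + eps)   # equivalently 2PR / (P + R)
    return precision, recall, dsc

# Illustrative usage: compare an automated CNP mask against a grader's mask.
auto_mask = np.random.rand(512, 512) > 0.5    # placeholder prediction
grader_mask = np.random.rand(512, 512) > 0.5  # placeholder ground truth
p, r, d = segmentation_metrics(auto_mask, grader_mask)
```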


Sensors
2019
Vol 19 (23)
pp. 5310
Author(s): Lai Kang, Yingmei Wei, Jie Jiang, Yuxiang Xie

Cylindrical panorama stitching is able to generate high resolution images of a scene with a wide field-of-view (FOV), making it a useful scene representation for applications like environmental sensing and robot localization. Traditional image stitching methods based on hand-crafted features are effective for constructing a cylindrical panorama from a sequence of images when the scene contains sufficiently many reliable features. However, these methods are unable to handle low-texture environments where no reliable feature correspondence can be established. This paper proposes a novel two-step image alignment method based on deep learning and iterative optimization to address the above issue. In particular, a lightweight end-to-end trainable convolutional neural network (CNN) architecture called ShiftNet is proposed to estimate the initial shifts between images, which are further optimized in a sub-pixel refinement procedure based on a specified camera motion model. Extensive experiments on a synthetic dataset, rendered photo-realistic images, and real images were carried out to evaluate the performance of our proposed method. Both qualitative and quantitative experimental results demonstrate that cylindrical panorama stitching based on our proposed image alignment method leads to significant improvements over traditional feature-based methods and recent deep learning-based methods for challenging low-texture environments.
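The two-step idea of a coarse CNN shift estimate followed by iterative refinement can be sketched as follows. This toy regressor is a hypothetical stand-in for ShiftNet: the TinyShiftNet name, the layer sizes, and the input format are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyShiftNet(nn.Module):
    """Illustrative shift regressor: takes two grayscale images stacked
    as channels and predicts a 2D translation (dx, dy) between them."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling to (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, 2)           # regress (dx, dy)

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)   # (N, 2, H, W) channel stack
        return self.head(self.features(x).flatten(1))

# Coarse estimate for one image pair; in a full pipeline this would seed
# the iterative sub-pixel refinement under the camera motion model.
net = TinyShiftNet()
a, b = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
coarse_shift = net(a, b)  # tensor of shape (1, 2)
```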


2021
Author(s): Martin Žofka, Linh Thuy Nguyen, Eva Mašátová, Petra Matoušková

Poor efficacy of some anthelmintics and rising concerns about widespread drug resistance have highlighted the need for new drug discovery. The parasitic nematode Haemonchus contortus is an important model organism widely used in studies of drug resistance and drug screening, where the current gold standard is the motility assay. We applied a deep learning approach, Mask R-CNN, to analysing motility videos and compared it with other commonly used algorithms of varying complexity, namely the Wiggle Index and the Wide Field-of-View Nematode Tracking Platform. Mask R-CNN consistently outperformed the other algorithms in prediction precision across videos containing varying rates of motile worms, with a mean absolute error of 5.6%. Using Mask R-CNN for motility assays confirmed a common problem of algorithms that use Non-Maximum Suppression, namely difficulty detecting overlapping objects, which negatively impacted overall precision. Using intersection over union (IoU) to classify motile/non-motile instances achieved an overall accuracy of 89%. Compared with the existing methods evaluated here, Mask R-CNN performed better, and we anticipate that this method will broaden the range of possible approaches to video analysis of worm motility. IoU shows promise as a metric for evaluating the motility of individual worms.
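Classifying a worm as motile or non-motile from IoU amounts to measuring how much its segmented mask overlaps itself across consecutive video frames: a worm that barely moves keeps a high self-overlap. The sketch below illustrates this idea; the mask_iou helper, the classify_motile rule, and the 0.8 threshold are hypothetical, not taken from the study.

```python
import numpy as np

def mask_iou(mask_t: np.ndarray, mask_t1: np.ndarray) -> float:
    """Intersection over union between two binary instance masks,
    e.g. the same worm segmented in consecutive frames."""
    a, b = mask_t.astype(bool), mask_t1.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are treated as identical
    return np.logical_and(a, b).sum() / union

def classify_motile(frame_ious, threshold: float = 0.8) -> bool:
    """Hypothetical rule: masks that overlap strongly across frames
    (high mean IoU) indicate a worm that barely moved, i.e. non-motile."""
    return float(np.mean(frame_ious)) < threshold  # True = motile
```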

